Apr 13 19:58:37.230988 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 13 19:58:37.231009 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:58:37.231017 kernel: KASLR enabled
Apr 13 19:58:37.231023 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 13 19:58:37.231030 kernel: printk: bootconsole [pl11] enabled
Apr 13 19:58:37.231036 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:58:37.231043 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 13 19:58:37.231049 kernel: random: crng init done
Apr 13 19:58:37.231055 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:58:37.231061 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 13 19:58:37.231067 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231073 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231080 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 13 19:58:37.231086 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231094 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231100 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231107 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231115 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231121 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231128 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 13 19:58:37.231134 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 13 19:58:37.231140 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 13 19:58:37.231147 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 13 19:58:37.231153 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 13 19:58:37.231159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 13 19:58:37.231166 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 13 19:58:37.231172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 13 19:58:37.231179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 13 19:58:37.231186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 13 19:58:37.231193 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 13 19:58:37.231199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 13 19:58:37.231206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 13 19:58:37.231212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 13 19:58:37.231218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 13 19:58:37.231224 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Apr 13 19:58:37.231231 kernel: Zone ranges:
Apr 13 19:58:37.231237 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 13 19:58:37.231243 kernel: DMA32 empty
Apr 13 19:58:37.231249 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 13 19:58:37.231256 kernel: Movable zone start for each node
Apr 13 19:58:37.231266 kernel: Early memory node ranges
Apr 13 19:58:37.231273 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 13 19:58:37.231280 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 13 19:58:37.231287 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 13 19:58:37.231293 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 13 19:58:37.231301 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 13 19:58:37.231308 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 13 19:58:37.231315 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 13 19:58:37.231321 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 13 19:58:37.231328 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 13 19:58:37.231335 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:58:37.231342 kernel: psci: PSCIv1.1 detected in firmware.
Apr 13 19:58:37.231348 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:58:37.231355 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 13 19:58:37.231362 kernel: psci: SMC Calling Convention v1.4
Apr 13 19:58:37.231368 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 13 19:58:37.231375 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 13 19:58:37.231383 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:58:37.231390 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:58:37.231397 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:58:37.231404 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:58:37.231410 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:58:37.231417 kernel: CPU features: detected: Hardware dirty bit management
Apr 13 19:58:37.231424 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:58:37.231431 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 13 19:58:37.231437 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 13 19:58:37.231444 kernel: CPU features: detected: ARM erratum 1418040
Apr 13 19:58:37.231451 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 13 19:58:37.231459 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 13 19:58:37.231466 kernel: alternatives: applying boot alternatives
Apr 13 19:58:37.231474 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:58:37.231481 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:58:37.231488 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:58:37.231494 kernel: Fallback order for Node 0: 0
Apr 13 19:58:37.231501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 13 19:58:37.231508 kernel: Policy zone: Normal
Apr 13 19:58:37.231515 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:58:37.231521 kernel: software IO TLB: area num 2.
Apr 13 19:58:37.231528 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 13 19:58:37.231537 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Apr 13 19:58:37.231544 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:58:37.231550 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:58:37.231558 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:58:37.231565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:58:37.231572 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:58:37.231579 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:58:37.231585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:58:37.231592 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:58:37.231599 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:58:37.231606 kernel: GICv3: 960 SPIs implemented
Apr 13 19:58:37.231614 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:58:37.231620 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:58:37.231627 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 13 19:58:37.231634 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 13 19:58:37.231641 kernel: ITS: No ITS available, not enabling LPIs
Apr 13 19:58:37.231647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:58:37.231654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:58:37.231661 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 13 19:58:37.231668 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 13 19:58:37.231675 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 13 19:58:37.233698 kernel: Console: colour dummy device 80x25
Apr 13 19:58:37.233726 kernel: printk: console [tty1] enabled
Apr 13 19:58:37.233734 kernel: ACPI: Core revision 20230628
Apr 13 19:58:37.233742 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 13 19:58:37.233749 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:58:37.233756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:58:37.233763 kernel: landlock: Up and running.
Apr 13 19:58:37.233770 kernel: SELinux: Initializing.
Apr 13 19:58:37.233777 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:58:37.233784 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:58:37.233793 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:58:37.233800 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:58:37.233807 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 13 19:58:37.233814 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 13 19:58:37.233821 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 13 19:58:37.233828 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:58:37.233835 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:58:37.233842 kernel: Remapping and enabling EFI services.
Apr 13 19:58:37.233856 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:58:37.233864 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:58:37.233871 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 13 19:58:37.233878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 13 19:58:37.233887 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 13 19:58:37.233895 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:58:37.233902 kernel: SMP: Total of 2 processors activated.
Apr 13 19:58:37.233910 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:58:37.233917 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 13 19:58:37.233926 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 13 19:58:37.233933 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:58:37.233941 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 13 19:58:37.233948 kernel: CPU features: detected: LSE atomic instructions
Apr 13 19:58:37.233955 kernel: CPU features: detected: Privileged Access Never
Apr 13 19:58:37.233962 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:58:37.233970 kernel: alternatives: applying system-wide alternatives
Apr 13 19:58:37.233977 kernel: devtmpfs: initialized
Apr 13 19:58:37.233984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:58:37.233993 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:58:37.234001 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:58:37.234008 kernel: SMBIOS 3.1.0 present.
Apr 13 19:58:37.234016 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026 Apr 13 19:58:37.234023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 19:58:37.234030 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 13 19:58:37.234038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 13 19:58:37.234046 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 13 19:58:37.234053 kernel: audit: initializing netlink subsys (disabled) Apr 13 19:58:37.234062 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Apr 13 19:58:37.234070 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 19:58:37.234077 kernel: cpuidle: using governor menu Apr 13 19:58:37.234085 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 13 19:58:37.234092 kernel: ASID allocator initialised with 32768 entries Apr 13 19:58:37.234100 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 19:58:37.234108 kernel: Serial: AMBA PL011 UART driver Apr 13 19:58:37.234115 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 13 19:58:37.234123 kernel: Modules: 0 pages in range for non-PLT usage Apr 13 19:58:37.234132 kernel: Modules: 509008 pages in range for PLT usage Apr 13 19:58:37.234139 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 19:58:37.234146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 19:58:37.234154 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 13 19:58:37.234161 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 13 19:58:37.234169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 19:58:37.234176 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 19:58:37.234184 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Apr 13 19:58:37.234191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 13 19:58:37.234200 kernel: ACPI: Added _OSI(Module Device) Apr 13 19:58:37.234207 kernel: ACPI: Added _OSI(Processor Device) Apr 13 19:58:37.234215 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 19:58:37.234222 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 19:58:37.234229 kernel: ACPI: Interpreter enabled Apr 13 19:58:37.234237 kernel: ACPI: Using GIC for interrupt routing Apr 13 19:58:37.234244 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 13 19:58:37.234251 kernel: printk: console [ttyAMA0] enabled Apr 13 19:58:37.234258 kernel: printk: bootconsole [pl11] disabled Apr 13 19:58:37.234267 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 13 19:58:37.234274 kernel: iommu: Default domain type: Translated Apr 13 19:58:37.234282 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 13 19:58:37.234289 kernel: efivars: Registered efivars operations Apr 13 19:58:37.234297 kernel: vgaarb: loaded Apr 13 19:58:37.234304 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 13 19:58:37.234312 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 19:58:37.234319 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 19:58:37.234326 kernel: pnp: PnP ACPI init Apr 13 19:58:37.234335 kernel: pnp: PnP ACPI: found 0 devices Apr 13 19:58:37.234342 kernel: NET: Registered PF_INET protocol family Apr 13 19:58:37.234350 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 19:58:37.234358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 19:58:37.234366 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 19:58:37.234373 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Apr 13 19:58:37.234381 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:58:37.234389 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:58:37.234397 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:58:37.234405 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:58:37.234413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:58:37.234420 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:58:37.234427 kernel: kvm [1]: HYP mode not available Apr 13 19:58:37.234435 kernel: Initialise system trusted keyrings Apr 13 19:58:37.234443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:58:37.234450 kernel: Key type asymmetric registered Apr 13 19:58:37.234458 kernel: Asymmetric key parser 'x509' registered Apr 13 19:58:37.234465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:58:37.234474 kernel: io scheduler mq-deadline registered Apr 13 19:58:37.234481 kernel: io scheduler kyber registered Apr 13 19:58:37.234489 kernel: io scheduler bfq registered Apr 13 19:58:37.234496 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 19:58:37.234504 kernel: thunder_xcv, ver 1.0 Apr 13 19:58:37.234511 kernel: thunder_bgx, ver 1.0 Apr 13 19:58:37.234518 kernel: nicpf, ver 1.0 Apr 13 19:58:37.234525 kernel: nicvf, ver 1.0 Apr 13 19:58:37.234653 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 13 19:58:37.234742 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:58:36 UTC (1776110316) Apr 13 19:58:37.234754 kernel: efifb: probing for efifb Apr 13 19:58:37.234761 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 13 19:58:37.234769 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 13 19:58:37.234776 kernel: efifb: scrolling: redraw Apr 13 19:58:37.234784 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Apr 13 19:58:37.234791 kernel: Console: switching to colour frame buffer device 128x48 Apr 13 19:58:37.234798 kernel: fb0: EFI VGA frame buffer device Apr 13 19:58:37.234809 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Apr 13 19:58:37.234816 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 19:58:37.234824 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Apr 13 19:58:37.234832 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 13 19:58:37.234839 kernel: watchdog: Hard watchdog permanently disabled Apr 13 19:58:37.234847 kernel: NET: Registered PF_INET6 protocol family Apr 13 19:58:37.234855 kernel: Segment Routing with IPv6 Apr 13 19:58:37.234862 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 19:58:37.234870 kernel: NET: Registered PF_PACKET protocol family Apr 13 19:58:37.234878 kernel: Key type dns_resolver registered Apr 13 19:58:37.234886 kernel: registered taskstats version 1 Apr 13 19:58:37.234894 kernel: Loading compiled-in X.509 certificates Apr 13 19:58:37.234901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7' Apr 13 19:58:37.234909 kernel: Key type .fscrypt registered Apr 13 19:58:37.234916 kernel: Key type fscrypt-provisioning registered Apr 13 19:58:37.234923 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 19:58:37.234931 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:58:37.234938 kernel: ima: No architecture policies found
Apr 13 19:58:37.234947 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:58:37.234954 kernel: clk: Disabling unused clocks
Apr 13 19:58:37.234962 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:58:37.234969 kernel: Run /init as init process
Apr 13 19:58:37.234976 kernel: with arguments:
Apr 13 19:58:37.234983 kernel: /init
Apr 13 19:58:37.234990 kernel: with environment:
Apr 13 19:58:37.234997 kernel: HOME=/
Apr 13 19:58:37.235004 kernel: TERM=linux
Apr 13 19:58:37.235014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:58:37.235025 systemd[1]: Detected virtualization microsoft.
Apr 13 19:58:37.235033 systemd[1]: Detected architecture arm64.
Apr 13 19:58:37.235041 systemd[1]: Running in initrd.
Apr 13 19:58:37.235048 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:58:37.235056 systemd[1]: Hostname set to .
Apr 13 19:58:37.235064 systemd[1]: Initializing machine ID from random generator.
Apr 13 19:58:37.235074 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:58:37.235082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:58:37.235090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:58:37.235099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:58:37.235107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:58:37.235115 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:58:37.235123 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:58:37.235133 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:58:37.235143 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:58:37.235151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:58:37.235159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:58:37.235168 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:58:37.235176 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:58:37.235184 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:58:37.235192 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:58:37.235199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:58:37.235209 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:58:37.235217 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:58:37.235225 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:58:37.235233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:58:37.235241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:58:37.235249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:58:37.235257 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:58:37.235265 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:58:37.235275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:58:37.235283 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:58:37.235291 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:58:37.235299 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:58:37.235306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:58:37.235332 systemd-journald[218]: Collecting audit messages is disabled.
Apr 13 19:58:37.235353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:37.235362 systemd-journald[218]: Journal started
Apr 13 19:58:37.235381 systemd-journald[218]: Runtime Journal (/run/log/journal/f2bc0ae5f5904a8e967f1b1c28e8a63f) is 8.0M, max 78.5M, 70.5M free.
Apr 13 19:58:37.246632 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:58:37.242057 systemd-modules-load[219]: Inserted module 'overlay'
Apr 13 19:58:37.257587 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:58:37.268305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:58:37.290200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:58:37.290229 kernel: Bridge firewalling registered
Apr 13 19:58:37.279211 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:58:37.290165 systemd-modules-load[219]: Inserted module 'br_netfilter'
Apr 13 19:58:37.292963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:58:37.302047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:37.325008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:58:37.336328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:58:37.350890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:58:37.372825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:58:37.378212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:58:37.399863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:58:37.405294 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:58:37.422559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:58:37.432598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:58:37.438088 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:58:37.463114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:58:37.480219 dracut-cmdline[250]: dracut-dracut-053
Apr 13 19:58:37.486866 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:58:37.480462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:58:37.524246 systemd-resolved[257]: Positive Trust Anchors:
Apr 13 19:58:37.524257 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:58:37.524289 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:58:37.527197 systemd-resolved[257]: Defaulting to hostname 'linux'.
Apr 13 19:58:37.528002 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:58:37.534816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:58:37.632704 kernel: SCSI subsystem initialized
Apr 13 19:58:37.641696 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:58:37.649710 kernel: iscsi: registered transport (tcp)
Apr 13 19:58:37.665642 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:58:37.665678 kernel: QLogic iSCSI HBA Driver
Apr 13 19:58:37.697454 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:58:37.712041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:58:37.740686 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:58:37.740740 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:58:37.745818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:58:37.792707 kernel: raid6: neonx8 gen() 15821 MB/s
Apr 13 19:58:37.811701 kernel: raid6: neonx4 gen() 15692 MB/s
Apr 13 19:58:37.830695 kernel: raid6: neonx2 gen() 13284 MB/s
Apr 13 19:58:37.850692 kernel: raid6: neonx1 gen() 10527 MB/s
Apr 13 19:58:37.869693 kernel: raid6: int64x8 gen() 6982 MB/s
Apr 13 19:58:37.888692 kernel: raid6: int64x4 gen() 7362 MB/s
Apr 13 19:58:37.908693 kernel: raid6: int64x2 gen() 6142 MB/s
Apr 13 19:58:37.930922 kernel: raid6: int64x1 gen() 5069 MB/s
Apr 13 19:58:37.930942 kernel: raid6: using algorithm neonx8 gen() 15821 MB/s
Apr 13 19:58:37.953496 kernel: raid6: .... xor() 12057 MB/s, rmw enabled
Apr 13 19:58:37.953515 kernel: raid6: using neon recovery algorithm
Apr 13 19:58:37.963927 kernel: xor: measuring software checksum speed
Apr 13 19:58:37.963941 kernel: 8regs : 19793 MB/sec
Apr 13 19:58:37.967894 kernel: 32regs : 19617 MB/sec
Apr 13 19:58:37.970824 kernel: arm64_neon : 26919 MB/sec
Apr 13 19:58:37.974141 kernel: xor: using function: arm64_neon (26919 MB/sec)
Apr 13 19:58:38.023707 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:58:38.034433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:58:38.047814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:58:38.068234 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Apr 13 19:58:38.072778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:58:38.086120 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:58:38.105464 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Apr 13 19:58:38.132360 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:58:38.147919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:58:38.184706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:58:38.205890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:58:38.234215 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:58:38.243583 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:58:38.255824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:58:38.266915 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:58:38.280822 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:58:38.294202 kernel: hv_vmbus: Vmbus version:5.3
Apr 13 19:58:38.295006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:58:38.299142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:58:38.311725 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:58:38.321538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:58:38.363278 kernel: hv_vmbus: registering driver hid_hyperv
Apr 13 19:58:38.363301 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 13 19:58:38.363310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 13 19:58:38.363329 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Apr 13 19:58:38.363339 kernel: hv_vmbus: registering driver hv_storvsc
Apr 13 19:58:38.363348 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 13 19:58:38.321761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:38.378394 kernel: hv_vmbus: registering driver hv_netvsc
Apr 13 19:58:38.378413 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 13 19:58:38.343496 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:38.385724 kernel: scsi host1: storvsc_host_t
Apr 13 19:58:38.410515 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Apr 13 19:58:38.410567 kernel: scsi host0: storvsc_host_t
Apr 13 19:58:38.410749 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Apr 13 19:58:38.401048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:38.425827 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Apr 13 19:58:38.416666 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:58:38.438051 kernel: PTP clock support registered
Apr 13 19:58:38.440794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:58:38.446363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:38.616610 kernel: hv_utils: Registering HyperV Utility Driver
Apr 13 19:58:38.616633 kernel: hv_vmbus: registering driver hv_utils
Apr 13 19:58:38.616643 kernel: hv_utils: Heartbeat IC version 3.0
Apr 13 19:58:38.616655 kernel: hv_utils: Shutdown IC version 3.2
Apr 13 19:58:38.616665 kernel: hv_utils: TimeSync IC version 4.0
Apr 13 19:58:38.616674 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: VF slot 1 added
Apr 13 19:58:38.616834 kernel: hv_vmbus: registering driver hv_pci
Apr 13 19:58:38.616844 kernel: hv_pci de699076-5301-4947-9207-47b2194fb257: PCI VMBus probing: Using version 0x10004
Apr 13 19:58:38.616933 kernel: hv_pci de699076-5301-4947-9207-47b2194fb257: PCI host bridge to bus 5301:00
Apr 13 19:58:38.617009 kernel: pci_bus 5301:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 13 19:58:38.617105 kernel: pci_bus 5301:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 13 19:58:38.618256 kernel: pci 5301:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 13 19:58:38.618284 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 13 19:58:38.618392 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 13 19:58:38.563540 systemd-resolved[257]: Clock change detected. Flushing caches.
Apr 13 19:58:38.638966 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 13 19:58:38.639167 kernel: pci 5301:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 13 19:58:38.617534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:38.649326 kernel: pci 5301:00:02.0: enabling Extended Tags
Apr 13 19:58:38.645761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:38.685045 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 13 19:58:38.685259 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 13 19:58:38.685352 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 13 19:58:38.685436 kernel: pci 5301:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5301:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 13 19:58:38.698609 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 19:58:38.704556 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 13 19:58:38.704712 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 13 19:58:38.704803 kernel: pci_bus 5301:00: busn_res: [bus 00-ff] end is updated to 00 Apr 13 19:58:38.704975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:58:38.723001 kernel: pci 5301:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 13 19:58:38.732137 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:38.736580 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 19:58:38.751291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#183 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 13 19:58:38.760367 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 19:58:38.792404 kernel: mlx5_core 5301:00:02.0: enabling device (0000 -> 0002) Apr 13 19:58:38.798861 kernel: mlx5_core 5301:00:02.0: firmware version: 16.30.5026 Apr 13 19:58:38.995644 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: VF registering: eth1 Apr 13 19:58:38.996019 kernel: mlx5_core 5301:00:02.0 eth1: joined to eth0 Apr 13 19:58:39.002404 kernel: mlx5_core 5301:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 13 19:58:39.013149 kernel: mlx5_core 5301:00:02.0 enP21249s1: renamed from eth1 Apr 13 19:58:39.191233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 13 19:58:39.360450 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Apr 13 19:58:39.370185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 13 19:58:39.381872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 13 19:58:39.401146 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (497) Apr 13 19:58:39.414768 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 13 19:58:39.420537 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 13 19:58:39.445292 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:58:39.469400 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:39.478139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:39.487138 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:40.490149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:40.491146 disk-uuid[606]: The operation has completed successfully. Apr 13 19:58:40.546483 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 13 19:58:40.546576 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:58:40.590267 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:58:40.602503 sh[720]: Success Apr 13 19:58:40.631143 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:58:40.966648 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 19:58:40.974142 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 19:58:40.985254 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 19:58:41.011565 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:58:41.011612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:41.017073 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:58:41.021809 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:58:41.025330 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:58:41.275693 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 19:58:41.284054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:58:41.302349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:58:41.309274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 13 19:58:41.347073 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:41.350684 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:41.350703 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:58:41.402668 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:58:41.420138 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:58:41.421334 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:58:41.436220 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:58:41.446362 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:41.447775 systemd-networkd[896]: lo: Link UP Apr 13 19:58:41.447785 systemd-networkd[896]: lo: Gained carrier Apr 13 19:58:41.449325 systemd-networkd[896]: Enumeration completed Apr 13 19:58:41.449403 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:58:41.457289 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:58:41.457292 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:58:41.457987 systemd[1]: Reached target network.target - Network. Apr 13 19:58:41.465381 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 19:58:41.497387 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 19:58:41.553141 kernel: mlx5_core 5301:00:02.0 enP21249s1: Link up Apr 13 19:58:41.593378 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: Data path switched to VF: enP21249s1 Apr 13 19:58:41.593064 systemd-networkd[896]: enP21249s1: Link UP Apr 13 19:58:41.593162 systemd-networkd[896]: eth0: Link UP Apr 13 19:58:41.593265 systemd-networkd[896]: eth0: Gained carrier Apr 13 19:58:41.593273 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:58:41.612407 systemd-networkd[896]: enP21249s1: Gained carrier Apr 13 19:58:41.625161 systemd-networkd[896]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 13 19:58:42.302699 ignition[905]: Ignition 2.19.0 Apr 13 19:58:42.302713 ignition[905]: Stage: fetch-offline Apr 13 19:58:42.302753 ignition[905]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.309640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:58:42.302762 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.302862 ignition[905]: parsed url from cmdline: "" Apr 13 19:58:42.302865 ignition[905]: no config URL provided Apr 13 19:58:42.302870 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:58:42.302877 ignition[905]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:58:42.331327 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 13 19:58:42.302882 ignition[905]: failed to fetch config: resource requires networking Apr 13 19:58:42.305871 ignition[905]: Ignition finished successfully Apr 13 19:58:42.351424 ignition[914]: Ignition 2.19.0 Apr 13 19:58:42.351430 ignition[914]: Stage: fetch Apr 13 19:58:42.351617 ignition[914]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.351629 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.351725 ignition[914]: parsed url from cmdline: "" Apr 13 19:58:42.351728 ignition[914]: no config URL provided Apr 13 19:58:42.351733 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:58:42.351739 ignition[914]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:58:42.351763 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 13 19:58:42.480846 ignition[914]: GET result: OK Apr 13 19:58:42.481439 ignition[914]: config has been read from IMDS userdata Apr 13 19:58:42.481483 ignition[914]: parsing config with SHA512: 9cd3b58220048c49e9406e0a32b0a13c92847b180ca863c78768a3b77db818e21c10140eb28003d115f9ebb0bef5ca5cf6ad86fe7712720453d9f3c87c0bfa97 Apr 13 19:58:42.485774 unknown[914]: fetched base config from "system" Apr 13 19:58:42.486166 ignition[914]: fetch: fetch complete Apr 13 19:58:42.485787 unknown[914]: fetched base config from "system" Apr 13 19:58:42.486170 ignition[914]: fetch: fetch passed Apr 13 19:58:42.485792 unknown[914]: fetched user config from "azure" Apr 13 19:58:42.486223 ignition[914]: Ignition finished successfully Apr 13 19:58:42.488801 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 19:58:42.508401 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 19:58:42.528640 ignition[920]: Ignition 2.19.0 Apr 13 19:58:42.528649 ignition[920]: Stage: kargs Apr 13 19:58:42.532722 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 13 19:58:42.528816 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.528825 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.529694 ignition[920]: kargs: kargs passed Apr 13 19:58:42.553246 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 19:58:42.529740 ignition[920]: Ignition finished successfully Apr 13 19:58:42.569579 ignition[926]: Ignition 2.19.0 Apr 13 19:58:42.569588 ignition[926]: Stage: disks Apr 13 19:58:42.573203 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 19:58:42.569757 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.578840 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 19:58:42.569766 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.583930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:58:42.570665 ignition[926]: disks: disks passed Apr 13 19:58:42.591666 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:58:42.570710 ignition[926]: Ignition finished successfully Apr 13 19:58:42.600211 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:58:42.608459 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:58:42.634374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 19:58:42.711304 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 13 19:58:42.719624 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 19:58:42.733361 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 19:58:42.791146 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:58:42.789562 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 13 19:58:42.793512 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:58:42.836202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:58:42.857135 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Apr 13 19:58:42.868622 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:42.868677 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:42.872166 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:58:42.878840 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:58:42.887913 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:58:42.888181 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 19:58:42.898927 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:58:42.898966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:58:42.915130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:58:42.922534 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:58:42.939397 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 19:58:43.354241 systemd-networkd[896]: eth0: Gained IPv6LL Apr 13 19:58:43.472151 coreos-metadata[962]: Apr 13 19:58:43.472 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 13 19:58:43.478661 coreos-metadata[962]: Apr 13 19:58:43.478 INFO Fetch successful Apr 13 19:58:43.478661 coreos-metadata[962]: Apr 13 19:58:43.478 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 13 19:58:43.492483 coreos-metadata[962]: Apr 13 19:58:43.492 INFO Fetch successful Apr 13 19:58:43.510172 coreos-metadata[962]: Apr 13 19:58:43.509 INFO wrote hostname ci-4081.3.7-a-39cd336750 to /sysroot/etc/hostname Apr 13 19:58:43.518496 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:58:43.806786 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:58:43.850157 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:58:43.873569 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:58:43.880905 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:58:45.081036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:58:45.096367 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:58:45.102962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 19:58:45.124360 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 13 19:58:45.129113 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:45.151056 ignition[1063]: INFO : Ignition 2.19.0 Apr 13 19:58:45.151056 ignition[1063]: INFO : Stage: mount Apr 13 19:58:45.158976 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:45.158976 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:45.158976 ignition[1063]: INFO : mount: mount passed Apr 13 19:58:45.158976 ignition[1063]: INFO : Ignition finished successfully Apr 13 19:58:45.155730 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 19:58:45.168397 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:58:45.189340 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 19:58:45.204172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:58:45.234242 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Apr 13 19:58:45.244534 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:45.244562 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:45.247948 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:58:45.254133 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:58:45.255874 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 19:58:45.281913 ignition[1091]: INFO : Ignition 2.19.0 Apr 13 19:58:45.281913 ignition[1091]: INFO : Stage: files Apr 13 19:58:45.288542 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:45.288542 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:45.288542 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:58:45.650356 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:58:45.650356 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:58:46.563112 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:58:46.569374 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:58:46.569374 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:58:46.563558 unknown[1091]: wrote ssh authorized keys file for user: core Apr 13 19:58:46.584941 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:58:46.584941 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:58:46.621423 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:58:46.721164 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Apr 13 19:58:47.197288 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:58:47.448725 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:47.448725 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: files passed Apr 13 19:58:47.464469 ignition[1091]: INFO : Ignition finished successfully Apr 13 19:58:47.465496 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:58:47.499339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:58:47.512299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 13 19:58:47.534026 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:58:47.534714 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 19:58:47.574268 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:58:47.574268 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:58:47.594080 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:58:47.575041 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:58:47.587401 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:58:47.612334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:58:47.644401 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 19:58:47.645310 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:58:47.654586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 19:58:47.664862 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 19:58:47.674213 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:58:47.690350 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:58:47.710147 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:58:47.726645 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:58:47.742647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:58:47.748134 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 13 19:58:47.758622 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 19:58:47.767848 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 19:58:47.767963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:58:47.781500 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 19:58:47.786619 systemd[1]: Stopped target basic.target - Basic System. Apr 13 19:58:47.795957 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 19:58:47.805682 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:58:47.814802 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 19:58:47.824822 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 19:58:47.834546 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:58:47.844744 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 19:58:47.853870 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 19:58:47.863806 systemd[1]: Stopped target swap.target - Swaps. Apr 13 19:58:47.871885 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 19:58:47.872001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:58:47.884192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:58:47.889154 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:58:47.899151 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 19:58:47.903399 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:58:47.909043 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 19:58:47.909159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Apr 13 19:58:47.923949 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 19:58:47.924065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:58:47.933674 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 19:58:47.933761 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 19:58:47.944300 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 13 19:58:48.007383 ignition[1143]: INFO : Ignition 2.19.0 Apr 13 19:58:47.944388 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:58:48.037229 ignition[1143]: INFO : Stage: umount Apr 13 19:58:48.037229 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:48.037229 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:48.037229 ignition[1143]: INFO : umount: umount passed Apr 13 19:58:48.037229 ignition[1143]: INFO : Ignition finished successfully Apr 13 19:58:47.971445 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 19:58:47.981453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 19:58:47.998163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 19:58:47.998327 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:58:48.008448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 19:58:48.008597 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 19:58:48.019815 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 19:58:48.019907 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 19:58:48.032287 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 19:58:48.032368 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 13 19:58:48.039741 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 19:58:48.039839 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 19:58:48.046765 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 19:58:48.046808 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 19:58:48.056534 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 19:58:48.056574 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 19:58:48.067115 systemd[1]: Stopped target network.target - Network. Apr 13 19:58:48.075118 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 19:58:48.075174 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:58:48.084508 systemd[1]: Stopped target paths.target - Path Units. Apr 13 19:58:48.092474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 19:58:48.096198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:58:48.102491 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 19:58:48.111528 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 19:58:48.120665 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 19:58:48.120721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:58:48.129404 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 19:58:48.129444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:58:48.138436 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 19:58:48.138480 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 19:58:48.147113 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 19:58:48.147153 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Apr 13 19:58:48.155965 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 19:58:48.169479 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 19:58:48.184164 systemd-networkd[896]: eth0: DHCPv6 lease lost Apr 13 19:58:48.188315 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 19:58:48.379571 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: Data path switched from VF: enP21249s1 Apr 13 19:58:48.188432 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 19:58:48.195322 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 19:58:48.195409 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 19:58:48.207819 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 19:58:48.207873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:58:48.238237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 19:58:48.246375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 19:58:48.246449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:58:48.257411 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:58:48.257472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:58:48.268443 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 19:58:48.268487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 19:58:48.273482 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 19:58:48.273523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:58:48.282587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Apr 13 19:58:48.322482 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 19:58:48.322640 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:58:48.332493 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 19:58:48.332536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 19:58:48.341335 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 19:58:48.341369 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:58:48.351204 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 19:58:48.351252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:58:48.370827 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 19:58:48.370876 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 19:58:48.379593 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:58:48.379639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:58:48.403334 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 19:58:48.415744 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 19:58:48.415811 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:58:48.424828 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 19:58:48.424874 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:58:48.435025 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 19:58:48.435075 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:58:48.446230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 13 19:58:48.446271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:58:48.455370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 19:58:48.455451 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 19:58:48.479286 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 19:58:48.479408 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 19:58:48.971687 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 19:58:48.980030 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 19:58:48.980160 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 19:58:48.988614 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 19:58:48.996651 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 19:58:48.996707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 19:58:49.018416 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 19:58:49.029971 systemd[1]: Switching root. 
Apr 13 19:58:49.108541 systemd-journald[218]: Journal stopped Apr 13 19:58:37.230988 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 13 19:58:37.231009 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026 Apr 13 19:58:37.231017 kernel: KASLR enabled Apr 13 19:58:37.231023 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Apr 13 19:58:37.231030 kernel: printk: bootconsole [pl11] enabled Apr 13 19:58:37.231036 kernel: efi: EFI v2.7 by EDK II Apr 13 19:58:37.231043 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Apr 13 19:58:37.231049 kernel: random: crng init done Apr 13 19:58:37.231055 kernel: ACPI: Early table checksum verification disabled Apr 13 19:58:37.231061 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Apr 13 19:58:37.231067 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231073 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231080 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628) Apr 13 19:58:37.231086 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231094 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231100 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231107 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231115 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231121 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 
VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231128 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Apr 13 19:58:37.231134 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Apr 13 19:58:37.231140 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Apr 13 19:58:37.231147 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Apr 13 19:58:37.231153 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Apr 13 19:58:37.231159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Apr 13 19:58:37.231166 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Apr 13 19:58:37.231172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Apr 13 19:58:37.231179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Apr 13 19:58:37.231186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Apr 13 19:58:37.231193 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Apr 13 19:58:37.231199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Apr 13 19:58:37.231206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Apr 13 19:58:37.231212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Apr 13 19:58:37.231218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Apr 13 19:58:37.231224 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Apr 13 19:58:37.231231 kernel: Zone ranges: Apr 13 19:58:37.231237 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Apr 13 19:58:37.231243 kernel: DMA32 empty Apr 13 19:58:37.231249 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Apr 13 19:58:37.231256 kernel: Movable zone start for each node Apr 13 19:58:37.231266 kernel: Early memory node ranges Apr 13 19:58:37.231273 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Apr 13 19:58:37.231280 kernel: node 0: [mem 
0x0000000000824000-0x000000003e54ffff] Apr 13 19:58:37.231287 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Apr 13 19:58:37.231293 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Apr 13 19:58:37.231301 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Apr 13 19:58:37.231308 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Apr 13 19:58:37.231315 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Apr 13 19:58:37.231321 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Apr 13 19:58:37.231328 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Apr 13 19:58:37.231335 kernel: psci: probing for conduit method from ACPI. Apr 13 19:58:37.231342 kernel: psci: PSCIv1.1 detected in firmware. Apr 13 19:58:37.231348 kernel: psci: Using standard PSCI v0.2 function IDs Apr 13 19:58:37.231355 kernel: psci: MIGRATE_INFO_TYPE not supported. Apr 13 19:58:37.231362 kernel: psci: SMC Calling Convention v1.4 Apr 13 19:58:37.231368 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Apr 13 19:58:37.231375 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Apr 13 19:58:37.231383 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Apr 13 19:58:37.231390 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Apr 13 19:58:37.231397 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 13 19:58:37.231404 kernel: Detected PIPT I-cache on CPU0 Apr 13 19:58:37.231410 kernel: CPU features: detected: GIC system register CPU interface Apr 13 19:58:37.231417 kernel: CPU features: detected: Hardware dirty bit management Apr 13 19:58:37.231424 kernel: CPU features: detected: Spectre-BHB Apr 13 19:58:37.231431 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 13 19:58:37.231437 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 13 19:58:37.231444 kernel: CPU features: detected: ARM erratum 1418040 Apr 13 19:58:37.231451 kernel: CPU features: detected: ARM erratum 
1542419 (kernel portion) Apr 13 19:58:37.231459 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 13 19:58:37.231466 kernel: alternatives: applying boot alternatives Apr 13 19:58:37.231474 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:58:37.231481 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 19:58:37.231488 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 19:58:37.231494 kernel: Fallback order for Node 0: 0 Apr 13 19:58:37.231501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Apr 13 19:58:37.231508 kernel: Policy zone: Normal Apr 13 19:58:37.231515 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 19:58:37.231521 kernel: software IO TLB: area num 2. Apr 13 19:58:37.231528 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Apr 13 19:58:37.231537 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Apr 13 19:58:37.231544 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 19:58:37.231550 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 19:58:37.231558 kernel: rcu: RCU event tracing is enabled. Apr 13 19:58:37.231565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 19:58:37.231572 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 19:58:37.231579 kernel: Tracing variant of Tasks RCU enabled. 
Apr 13 19:58:37.231585 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 19:58:37.231592 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 19:58:37.231599 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 13 19:58:37.231606 kernel: GICv3: 960 SPIs implemented Apr 13 19:58:37.231614 kernel: GICv3: 0 Extended SPIs implemented Apr 13 19:58:37.231620 kernel: Root IRQ handler: gic_handle_irq Apr 13 19:58:37.231627 kernel: GICv3: GICv3 features: 16 PPIs, RSS Apr 13 19:58:37.231634 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Apr 13 19:58:37.231641 kernel: ITS: No ITS available, not enabling LPIs Apr 13 19:58:37.231647 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 13 19:58:37.231654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 13 19:58:37.231661 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 13 19:58:37.231668 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 13 19:58:37.231675 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 13 19:58:37.233698 kernel: Console: colour dummy device 80x25 Apr 13 19:58:37.233726 kernel: printk: console [tty1] enabled Apr 13 19:58:37.233734 kernel: ACPI: Core revision 20230628 Apr 13 19:58:37.233742 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 13 19:58:37.233749 kernel: pid_max: default: 32768 minimum: 301 Apr 13 19:58:37.233756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 19:58:37.233763 kernel: landlock: Up and running. Apr 13 19:58:37.233770 kernel: SELinux: Initializing. 
Apr 13 19:58:37.233777 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 19:58:37.233784 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 19:58:37.233793 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 19:58:37.233800 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 19:58:37.233807 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Apr 13 19:58:37.233814 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0 Apr 13 19:58:37.233821 kernel: Hyper-V: enabling crash_kexec_post_notifiers Apr 13 19:58:37.233828 kernel: rcu: Hierarchical SRCU implementation. Apr 13 19:58:37.233835 kernel: rcu: Max phase no-delay instances is 400. Apr 13 19:58:37.233842 kernel: Remapping and enabling EFI services. Apr 13 19:58:37.233856 kernel: smp: Bringing up secondary CPUs ... Apr 13 19:58:37.233864 kernel: Detected PIPT I-cache on CPU1 Apr 13 19:58:37.233871 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Apr 13 19:58:37.233878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 13 19:58:37.233887 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 13 19:58:37.233895 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 19:58:37.233902 kernel: SMP: Total of 2 processors activated. 
Apr 13 19:58:37.233910 kernel: CPU features: detected: 32-bit EL0 Support Apr 13 19:58:37.233917 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Apr 13 19:58:37.233926 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 13 19:58:37.233933 kernel: CPU features: detected: CRC32 instructions Apr 13 19:58:37.233941 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 13 19:58:37.233948 kernel: CPU features: detected: LSE atomic instructions Apr 13 19:58:37.233955 kernel: CPU features: detected: Privileged Access Never Apr 13 19:58:37.233962 kernel: CPU: All CPU(s) started at EL1 Apr 13 19:58:37.233970 kernel: alternatives: applying system-wide alternatives Apr 13 19:58:37.233977 kernel: devtmpfs: initialized Apr 13 19:58:37.233984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 19:58:37.233993 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 19:58:37.234001 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 19:58:37.234008 kernel: SMBIOS 3.1.0 present. 
Apr 13 19:58:37.234016 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026 Apr 13 19:58:37.234023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 19:58:37.234030 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 13 19:58:37.234038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 13 19:58:37.234046 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 13 19:58:37.234053 kernel: audit: initializing netlink subsys (disabled) Apr 13 19:58:37.234062 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Apr 13 19:58:37.234070 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 19:58:37.234077 kernel: cpuidle: using governor menu Apr 13 19:58:37.234085 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 13 19:58:37.234092 kernel: ASID allocator initialised with 32768 entries Apr 13 19:58:37.234100 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 19:58:37.234108 kernel: Serial: AMBA PL011 UART driver Apr 13 19:58:37.234115 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 13 19:58:37.234123 kernel: Modules: 0 pages in range for non-PLT usage Apr 13 19:58:37.234132 kernel: Modules: 509008 pages in range for PLT usage Apr 13 19:58:37.234139 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 19:58:37.234146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 19:58:37.234154 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 13 19:58:37.234161 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 13 19:58:37.234169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 19:58:37.234176 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 19:58:37.234184 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Apr 13 19:58:37.234191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 13 19:58:37.234200 kernel: ACPI: Added _OSI(Module Device) Apr 13 19:58:37.234207 kernel: ACPI: Added _OSI(Processor Device) Apr 13 19:58:37.234215 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 19:58:37.234222 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 19:58:37.234229 kernel: ACPI: Interpreter enabled Apr 13 19:58:37.234237 kernel: ACPI: Using GIC for interrupt routing Apr 13 19:58:37.234244 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Apr 13 19:58:37.234251 kernel: printk: console [ttyAMA0] enabled Apr 13 19:58:37.234258 kernel: printk: bootconsole [pl11] disabled Apr 13 19:58:37.234267 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Apr 13 19:58:37.234274 kernel: iommu: Default domain type: Translated Apr 13 19:58:37.234282 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 13 19:58:37.234289 kernel: efivars: Registered efivars operations Apr 13 19:58:37.234297 kernel: vgaarb: loaded Apr 13 19:58:37.234304 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 13 19:58:37.234312 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 19:58:37.234319 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 19:58:37.234326 kernel: pnp: PnP ACPI init Apr 13 19:58:37.234335 kernel: pnp: PnP ACPI: found 0 devices Apr 13 19:58:37.234342 kernel: NET: Registered PF_INET protocol family Apr 13 19:58:37.234350 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 19:58:37.234358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 19:58:37.234366 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 19:58:37.234373 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Apr 13 19:58:37.234381 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:58:37.234389 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:58:37.234397 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:58:37.234405 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:58:37.234413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:58:37.234420 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:58:37.234427 kernel: kvm [1]: HYP mode not available Apr 13 19:58:37.234435 kernel: Initialise system trusted keyrings Apr 13 19:58:37.234443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:58:37.234450 kernel: Key type asymmetric registered Apr 13 19:58:37.234458 kernel: Asymmetric key parser 'x509' registered Apr 13 19:58:37.234465 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:58:37.234474 kernel: io scheduler mq-deadline registered Apr 13 19:58:37.234481 kernel: io scheduler kyber registered Apr 13 19:58:37.234489 kernel: io scheduler bfq registered Apr 13 19:58:37.234496 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 19:58:37.234504 kernel: thunder_xcv, ver 1.0 Apr 13 19:58:37.234511 kernel: thunder_bgx, ver 1.0 Apr 13 19:58:37.234518 kernel: nicpf, ver 1.0 Apr 13 19:58:37.234525 kernel: nicvf, ver 1.0 Apr 13 19:58:37.234653 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 13 19:58:37.234742 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:58:36 UTC (1776110316) Apr 13 19:58:37.234754 kernel: efifb: probing for efifb Apr 13 19:58:37.234761 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Apr 13 19:58:37.234769 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Apr 13 19:58:37.234776 kernel: efifb: scrolling: redraw Apr 13 19:58:37.234784 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Apr 13 19:58:37.234791 kernel: Console: switching to colour frame buffer device 128x48 Apr 13 19:58:37.234798 kernel: fb0: EFI VGA frame buffer device Apr 13 19:58:37.234809 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Apr 13 19:58:37.234816 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 19:58:37.234824 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Apr 13 19:58:37.234832 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 13 19:58:37.234839 kernel: watchdog: Hard watchdog permanently disabled Apr 13 19:58:37.234847 kernel: NET: Registered PF_INET6 protocol family Apr 13 19:58:37.234855 kernel: Segment Routing with IPv6 Apr 13 19:58:37.234862 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 19:58:37.234870 kernel: NET: Registered PF_PACKET protocol family Apr 13 19:58:37.234878 kernel: Key type dns_resolver registered Apr 13 19:58:37.234886 kernel: registered taskstats version 1 Apr 13 19:58:37.234894 kernel: Loading compiled-in X.509 certificates Apr 13 19:58:37.234901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7' Apr 13 19:58:37.234909 kernel: Key type .fscrypt registered Apr 13 19:58:37.234916 kernel: Key type fscrypt-provisioning registered Apr 13 19:58:37.234923 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 19:58:37.234931 kernel: ima: Allocated hash algorithm: sha1 Apr 13 19:58:37.234938 kernel: ima: No architecture policies found Apr 13 19:58:37.234947 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 13 19:58:37.234954 kernel: clk: Disabling unused clocks Apr 13 19:58:37.234962 kernel: Freeing unused kernel memory: 39424K Apr 13 19:58:37.234969 kernel: Run /init as init process Apr 13 19:58:37.234976 kernel: with arguments: Apr 13 19:58:37.234983 kernel: /init Apr 13 19:58:37.234990 kernel: with environment: Apr 13 19:58:37.234997 kernel: HOME=/ Apr 13 19:58:37.235004 kernel: TERM=linux Apr 13 19:58:37.235014 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:58:37.235025 systemd[1]: Detected virtualization microsoft. Apr 13 19:58:37.235033 systemd[1]: Detected architecture arm64. Apr 13 19:58:37.235041 systemd[1]: Running in initrd. Apr 13 19:58:37.235048 systemd[1]: No hostname configured, using default hostname. Apr 13 19:58:37.235056 systemd[1]: Hostname set to . Apr 13 19:58:37.235064 systemd[1]: Initializing machine ID from random generator. Apr 13 19:58:37.235074 systemd[1]: Queued start job for default target initrd.target. Apr 13 19:58:37.235082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:58:37.235090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:58:37.235099 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 19:58:37.235107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 13 19:58:37.235115 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 19:58:37.235123 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 19:58:37.235133 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 19:58:37.235143 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 19:58:37.235151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:58:37.235159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:58:37.235168 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:58:37.235176 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:58:37.235184 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:58:37.235192 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:58:37.235199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:58:37.235209 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:58:37.235217 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 19:58:37.235225 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:58:37.235233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:58:37.235241 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:58:37.235249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:58:37.235257 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:58:37.235265 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Apr 13 19:58:37.235275 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:58:37.235283 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 19:58:37.235291 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 19:58:37.235299 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:58:37.235306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:58:37.235332 systemd-journald[218]: Collecting audit messages is disabled. Apr 13 19:58:37.235353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:58:37.235362 systemd-journald[218]: Journal started Apr 13 19:58:37.235381 systemd-journald[218]: Runtime Journal (/run/log/journal/f2bc0ae5f5904a8e967f1b1c28e8a63f) is 8.0M, max 78.5M, 70.5M free. Apr 13 19:58:37.246632 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 19:58:37.242057 systemd-modules-load[219]: Inserted module 'overlay' Apr 13 19:58:37.257587 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 19:58:37.268305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:58:37.290200 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 19:58:37.290229 kernel: Bridge firewalling registered Apr 13 19:58:37.279211 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 19:58:37.290165 systemd-modules-load[219]: Inserted module 'br_netfilter' Apr 13 19:58:37.292963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:58:37.302047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:58:37.325008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 13 19:58:37.336328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:58:37.350890 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:58:37.372825 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:58:37.378212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:58:37.399863 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:58:37.405294 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:58:37.422559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 19:58:37.432598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:58:37.438088 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:58:37.463114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:58:37.480219 dracut-cmdline[250]: dracut-dracut-053 Apr 13 19:58:37.486866 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:58:37.480462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:58:37.524246 systemd-resolved[257]: Positive Trust Anchors: Apr 13 19:58:37.524257 systemd-resolved[257]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:58:37.524289 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:58:37.527197 systemd-resolved[257]: Defaulting to hostname 'linux'. Apr 13 19:58:37.528002 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:58:37.534816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:58:37.632704 kernel: SCSI subsystem initialized Apr 13 19:58:37.641696 kernel: Loading iSCSI transport class v2.0-870. Apr 13 19:58:37.649710 kernel: iscsi: registered transport (tcp) Apr 13 19:58:37.665642 kernel: iscsi: registered transport (qla4xxx) Apr 13 19:58:37.665678 kernel: QLogic iSCSI HBA Driver Apr 13 19:58:37.697454 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 19:58:37.712041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 19:58:37.740686 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 13 19:58:37.740740 kernel: device-mapper: uevent: version 1.0.3 Apr 13 19:58:37.745818 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 19:58:37.792707 kernel: raid6: neonx8 gen() 15821 MB/s Apr 13 19:58:37.811701 kernel: raid6: neonx4 gen() 15692 MB/s Apr 13 19:58:37.830695 kernel: raid6: neonx2 gen() 13284 MB/s Apr 13 19:58:37.850692 kernel: raid6: neonx1 gen() 10527 MB/s Apr 13 19:58:37.869693 kernel: raid6: int64x8 gen() 6982 MB/s Apr 13 19:58:37.888692 kernel: raid6: int64x4 gen() 7362 MB/s Apr 13 19:58:37.908693 kernel: raid6: int64x2 gen() 6142 MB/s Apr 13 19:58:37.930922 kernel: raid6: int64x1 gen() 5069 MB/s Apr 13 19:58:37.930942 kernel: raid6: using algorithm neonx8 gen() 15821 MB/s Apr 13 19:58:37.953496 kernel: raid6: .... xor() 12057 MB/s, rmw enabled Apr 13 19:58:37.953515 kernel: raid6: using neon recovery algorithm Apr 13 19:58:37.963927 kernel: xor: measuring software checksum speed Apr 13 19:58:37.963941 kernel: 8regs : 19793 MB/sec Apr 13 19:58:37.967894 kernel: 32regs : 19617 MB/sec Apr 13 19:58:37.970824 kernel: arm64_neon : 26919 MB/sec Apr 13 19:58:37.974141 kernel: xor: using function: arm64_neon (26919 MB/sec) Apr 13 19:58:38.023707 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 19:58:38.034433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 19:58:38.047814 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 19:58:38.068234 systemd-udevd[437]: Using default interface naming scheme 'v255'. Apr 13 19:58:38.072778 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 19:58:38.086120 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 19:58:38.105464 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Apr 13 19:58:38.132360 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 19:58:38.147919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 19:58:38.184706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 19:58:38.205890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 19:58:38.234215 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 19:58:38.243583 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 19:58:38.255824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 19:58:38.266915 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 19:58:38.280822 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 19:58:38.294202 kernel: hv_vmbus: Vmbus version:5.3 Apr 13 19:58:38.295006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 19:58:38.299142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:58:38.311725 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:58:38.321538 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:58:38.363278 kernel: hv_vmbus: registering driver hid_hyperv Apr 13 19:58:38.363301 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 13 19:58:38.363310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 13 19:58:38.363329 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Apr 13 19:58:38.363339 kernel: hv_vmbus: registering driver hv_storvsc Apr 13 19:58:38.363348 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 13 19:58:38.321761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:58:38.378394 kernel: hv_vmbus: registering driver hv_netvsc Apr 13 19:58:38.378413 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 13 19:58:38.343496 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:58:38.385724 kernel: scsi host1: storvsc_host_t Apr 13 19:58:38.410515 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Apr 13 19:58:38.410567 kernel: scsi host0: storvsc_host_t Apr 13 19:58:38.410749 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 13 19:58:38.401048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:58:38.425827 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 13 19:58:38.416666 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 19:58:38.438051 kernel: PTP clock support registered Apr 13 19:58:38.440794 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 19:58:38.446363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:58:38.616610 kernel: hv_utils: Registering HyperV Utility Driver Apr 13 19:58:38.616633 kernel: hv_vmbus: registering driver hv_utils Apr 13 19:58:38.616643 kernel: hv_utils: Heartbeat IC version 3.0 Apr 13 19:58:38.616655 kernel: hv_utils: Shutdown IC version 3.2 Apr 13 19:58:38.616665 kernel: hv_utils: TimeSync IC version 4.0 Apr 13 19:58:38.616674 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: VF slot 1 added Apr 13 19:58:38.616834 kernel: hv_vmbus: registering driver hv_pci Apr 13 19:58:38.616844 kernel: hv_pci de699076-5301-4947-9207-47b2194fb257: PCI VMBus probing: Using version 0x10004 Apr 13 19:58:38.616933 kernel: hv_pci de699076-5301-4947-9207-47b2194fb257: PCI host bridge to bus 5301:00 Apr 13 19:58:38.617009 kernel: pci_bus 5301:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 13 19:58:38.617105 kernel: pci_bus 5301:00: No busn resource found for root bus, will use [bus 00-ff] Apr 13 19:58:38.618256 kernel: pci 5301:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 13 19:58:38.618284 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 13 19:58:38.618392 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 19:58:38.563540 systemd-resolved[257]: Clock change detected. Flushing caches. Apr 13 19:58:38.638966 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 13 19:58:38.639167 kernel: pci 5301:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 13 19:58:38.617534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:58:38.649326 kernel: pci 5301:00:02.0: enabling Extended Tags Apr 13 19:58:38.645761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 13 19:58:38.685045 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 13 19:58:38.685259 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 13 19:58:38.685352 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 13 19:58:38.685436 kernel: pci 5301:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5301:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 13 19:58:38.698609 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 19:58:38.704556 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 13 19:58:38.704712 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 13 19:58:38.704803 kernel: pci_bus 5301:00: busn_res: [bus 00-ff] end is updated to 00 Apr 13 19:58:38.704975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:58:38.723001 kernel: pci 5301:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 13 19:58:38.732137 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:38.736580 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 19:58:38.751291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#183 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 13 19:58:38.760367 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 19:58:38.792404 kernel: mlx5_core 5301:00:02.0: enabling device (0000 -> 0002) Apr 13 19:58:38.798861 kernel: mlx5_core 5301:00:02.0: firmware version: 16.30.5026 Apr 13 19:58:38.995644 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: VF registering: eth1 Apr 13 19:58:38.996019 kernel: mlx5_core 5301:00:02.0 eth1: joined to eth0 Apr 13 19:58:39.002404 kernel: mlx5_core 5301:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 13 19:58:39.013149 kernel: mlx5_core 5301:00:02.0 enP21249s1: renamed from eth1 Apr 13 19:58:39.191233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 13 19:58:39.360450 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Apr 13 19:58:39.370185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 13 19:58:39.381872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Apr 13 19:58:39.401146 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (497) Apr 13 19:58:39.414768 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 13 19:58:39.420537 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 13 19:58:39.445292 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 19:58:39.469400 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:39.478139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:39.487138 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:40.490149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 19:58:40.491146 disk-uuid[606]: The operation has completed successfully. Apr 13 19:58:40.546483 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 13 19:58:40.546576 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 19:58:40.590267 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 19:58:40.602503 sh[720]: Success Apr 13 19:58:40.631143 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 13 19:58:40.966648 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 19:58:40.974142 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 19:58:40.985254 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 19:58:41.011565 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd Apr 13 19:58:41.011612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:41.017073 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 19:58:41.021809 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 19:58:41.025330 kernel: BTRFS info (device dm-0): using free space tree Apr 13 19:58:41.275693 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 19:58:41.284054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 19:58:41.302349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 19:58:41.309274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 13 19:58:41.347073 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:41.350684 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:41.350703 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:58:41.402668 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 19:58:41.420138 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:58:41.421334 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 19:58:41.436220 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 19:58:41.446362 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:41.447775 systemd-networkd[896]: lo: Link UP Apr 13 19:58:41.447785 systemd-networkd[896]: lo: Gained carrier Apr 13 19:58:41.449325 systemd-networkd[896]: Enumeration completed Apr 13 19:58:41.449403 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 19:58:41.457289 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:58:41.457292 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 19:58:41.457987 systemd[1]: Reached target network.target - Network. Apr 13 19:58:41.465381 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 19:58:41.497387 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 19:58:41.553141 kernel: mlx5_core 5301:00:02.0 enP21249s1: Link up Apr 13 19:58:41.593378 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: Data path switched to VF: enP21249s1 Apr 13 19:58:41.593064 systemd-networkd[896]: enP21249s1: Link UP Apr 13 19:58:41.593162 systemd-networkd[896]: eth0: Link UP Apr 13 19:58:41.593265 systemd-networkd[896]: eth0: Gained carrier Apr 13 19:58:41.593273 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:58:41.612407 systemd-networkd[896]: enP21249s1: Gained carrier Apr 13 19:58:41.625161 systemd-networkd[896]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 13 19:58:42.302699 ignition[905]: Ignition 2.19.0 Apr 13 19:58:42.302713 ignition[905]: Stage: fetch-offline Apr 13 19:58:42.302753 ignition[905]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.309640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 19:58:42.302762 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.302862 ignition[905]: parsed url from cmdline: "" Apr 13 19:58:42.302865 ignition[905]: no config URL provided Apr 13 19:58:42.302870 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:58:42.302877 ignition[905]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:58:42.331327 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 13 19:58:42.302882 ignition[905]: failed to fetch config: resource requires networking Apr 13 19:58:42.305871 ignition[905]: Ignition finished successfully Apr 13 19:58:42.351424 ignition[914]: Ignition 2.19.0 Apr 13 19:58:42.351430 ignition[914]: Stage: fetch Apr 13 19:58:42.351617 ignition[914]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.351629 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.351725 ignition[914]: parsed url from cmdline: "" Apr 13 19:58:42.351728 ignition[914]: no config URL provided Apr 13 19:58:42.351733 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 19:58:42.351739 ignition[914]: no config at "/usr/lib/ignition/user.ign" Apr 13 19:58:42.351763 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 13 19:58:42.480846 ignition[914]: GET result: OK Apr 13 19:58:42.481439 ignition[914]: config has been read from IMDS userdata Apr 13 19:58:42.481483 ignition[914]: parsing config with SHA512: 9cd3b58220048c49e9406e0a32b0a13c92847b180ca863c78768a3b77db818e21c10140eb28003d115f9ebb0bef5ca5cf6ad86fe7712720453d9f3c87c0bfa97 Apr 13 19:58:42.485774 unknown[914]: fetched base config from "system" Apr 13 19:58:42.486166 ignition[914]: fetch: fetch complete Apr 13 19:58:42.485787 unknown[914]: fetched base config from "system" Apr 13 19:58:42.486170 ignition[914]: fetch: fetch passed Apr 13 19:58:42.485792 unknown[914]: fetched user config from "azure" Apr 13 19:58:42.486223 ignition[914]: Ignition finished successfully Apr 13 19:58:42.488801 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 19:58:42.508401 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 19:58:42.528640 ignition[920]: Ignition 2.19.0 Apr 13 19:58:42.528649 ignition[920]: Stage: kargs Apr 13 19:58:42.532722 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 13 19:58:42.528816 ignition[920]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.528825 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.529694 ignition[920]: kargs: kargs passed Apr 13 19:58:42.553246 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 19:58:42.529740 ignition[920]: Ignition finished successfully Apr 13 19:58:42.569579 ignition[926]: Ignition 2.19.0 Apr 13 19:58:42.569588 ignition[926]: Stage: disks Apr 13 19:58:42.573203 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 19:58:42.569757 ignition[926]: no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:42.578840 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 19:58:42.569766 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:42.583930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 19:58:42.570665 ignition[926]: disks: disks passed Apr 13 19:58:42.591666 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 19:58:42.570710 ignition[926]: Ignition finished successfully Apr 13 19:58:42.600211 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 19:58:42.608459 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:58:42.634374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 19:58:42.711304 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 13 19:58:42.719624 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 19:58:42.733361 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 19:58:42.791146 kernel: EXT4-fs (sda9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none. Apr 13 19:58:42.789562 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 13 19:58:42.793512 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 19:58:42.836202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:58:42.857135 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Apr 13 19:58:42.868622 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:42.868677 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:42.872166 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:58:42.878840 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 19:58:42.887913 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:58:42.888181 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 19:58:42.898927 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 19:58:42.898966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 19:58:42.915130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 19:58:42.922534 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 19:58:42.939397 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 19:58:43.354241 systemd-networkd[896]: eth0: Gained IPv6LL Apr 13 19:58:43.472151 coreos-metadata[962]: Apr 13 19:58:43.472 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 13 19:58:43.478661 coreos-metadata[962]: Apr 13 19:58:43.478 INFO Fetch successful Apr 13 19:58:43.478661 coreos-metadata[962]: Apr 13 19:58:43.478 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 13 19:58:43.492483 coreos-metadata[962]: Apr 13 19:58:43.492 INFO Fetch successful Apr 13 19:58:43.510172 coreos-metadata[962]: Apr 13 19:58:43.509 INFO wrote hostname ci-4081.3.7-a-39cd336750 to /sysroot/etc/hostname Apr 13 19:58:43.518496 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 19:58:43.806786 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 19:58:43.850157 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Apr 13 19:58:43.873569 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 19:58:43.880905 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 19:58:45.081036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 19:58:45.096367 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 19:58:45.102962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 19:58:45.124360 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 13 19:58:45.129113 kernel: BTRFS info (device sda6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:45.151056 ignition[1063]: INFO : Ignition 2.19.0 Apr 13 19:58:45.151056 ignition[1063]: INFO : Stage: mount Apr 13 19:58:45.158976 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:45.158976 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:45.158976 ignition[1063]: INFO : mount: mount passed Apr 13 19:58:45.158976 ignition[1063]: INFO : Ignition finished successfully Apr 13 19:58:45.155730 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 19:58:45.168397 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 19:58:45.189340 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 19:58:45.204172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 19:58:45.234242 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1074) Apr 13 19:58:45.244534 kernel: BTRFS info (device sda6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3 Apr 13 19:58:45.244562 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 13 19:58:45.247948 kernel: BTRFS info (device sda6): using free space tree Apr 13 19:58:45.254133 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 19:58:45.255874 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 19:58:45.281913 ignition[1091]: INFO : Ignition 2.19.0 Apr 13 19:58:45.281913 ignition[1091]: INFO : Stage: files Apr 13 19:58:45.288542 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 19:58:45.288542 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 13 19:58:45.288542 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping Apr 13 19:58:45.650356 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 19:58:45.650356 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 19:58:46.563112 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 19:58:46.569374 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 19:58:46.569374 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 19:58:46.563558 unknown[1091]: wrote ssh authorized keys file for user: core Apr 13 19:58:46.584941 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:58:46.584941 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 13 19:58:46.621423 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 19:58:46.721164 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:46.729700 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Apr 13 19:58:47.197288 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 19:58:47.448725 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 13 19:58:47.448725 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 19:58:47.464469 ignition[1091]: INFO : files: files passed Apr 13 19:58:47.464469 ignition[1091]: INFO : Ignition finished successfully Apr 13 19:58:47.465496 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 19:58:47.499339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 19:58:47.512299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 13 19:58:47.534026 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 19:58:47.534714 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 19:58:47.574268 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:58:47.574268 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:58:47.594080 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 19:58:47.575041 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 19:58:47.587401 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 19:58:47.612334 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 19:58:47.644401 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 19:58:47.645310 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 19:58:47.654586 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 19:58:47.664862 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 19:58:47.674213 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 19:58:47.690350 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 19:58:47.710147 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 19:58:47.726645 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 19:58:47.742647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:58:47.748134 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 13 19:58:47.758622 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:58:47.767848 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:58:47.767963 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:58:47.781500 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:58:47.786619 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:58:47.795957 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:58:47.805682 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:58:47.814802 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:58:47.824822 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:58:47.834546 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:58:47.844744 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:58:47.853870 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:58:47.863806 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:58:47.871885 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:58:47.872001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:58:47.884192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:58:47.889154 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:58:47.899151 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:58:47.903399 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:58:47.909043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:58:47.909159 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:58:47.923949 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:58:47.924065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:58:47.933674 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:58:47.933761 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:58:47.944300 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 19:58:48.007383 ignition[1143]: INFO : Ignition 2.19.0
Apr 13 19:58:47.944388 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 19:58:48.037229 ignition[1143]: INFO : Stage: umount
Apr 13 19:58:48.037229 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:58:48.037229 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 13 19:58:48.037229 ignition[1143]: INFO : umount: umount passed
Apr 13 19:58:48.037229 ignition[1143]: INFO : Ignition finished successfully
Apr 13 19:58:47.971445 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:58:47.981453 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:58:47.998163 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:58:47.998327 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:58:48.008448 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:58:48.008597 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:58:48.019815 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:58:48.019907 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:58:48.032287 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:58:48.032368 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:58:48.039741 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:58:48.039839 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:58:48.046765 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:58:48.046808 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:58:48.056534 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:58:48.056574 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:58:48.067115 systemd[1]: Stopped target network.target - Network.
Apr 13 19:58:48.075118 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:58:48.075174 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:58:48.084508 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:58:48.092474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:58:48.096198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:58:48.102491 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:58:48.111528 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:58:48.120665 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:58:48.120721 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:58:48.129404 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:58:48.129444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:58:48.138436 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:58:48.138480 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:58:48.147113 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:58:48.147153 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:58:48.155965 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:58:48.169479 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:58:48.184164 systemd-networkd[896]: eth0: DHCPv6 lease lost
Apr 13 19:58:48.188315 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:58:48.379571 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: Data path switched from VF: enP21249s1
Apr 13 19:58:48.188432 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:58:48.195322 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:58:48.195409 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:58:48.207819 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:58:48.207873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:58:48.238237 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:58:48.246375 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:58:48.246449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:58:48.257411 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:58:48.257472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:58:48.268443 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:58:48.268487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:58:48.273482 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:58:48.273523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:58:48.282587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:58:48.322482 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:58:48.322640 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:58:48.332493 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:58:48.332536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:58:48.341335 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:58:48.341369 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:58:48.351204 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:58:48.351252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:58:48.370827 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:58:48.370876 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:58:48.379593 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:58:48.379639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:58:48.403334 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:58:48.415744 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:58:48.415811 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:58:48.424828 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 19:58:48.424874 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:58:48.435025 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:58:48.435075 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:58:48.446230 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:58:48.446271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:48.455370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:58:48.455451 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:58:48.479286 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:58:48.479408 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:58:48.971687 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:58:48.980030 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:58:48.980160 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:58:48.988614 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:58:48.996651 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:58:48.996707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:58:49.018416 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:58:49.029971 systemd[1]: Switching root.
Apr 13 19:58:49.108541 systemd-journald[218]: Journal stopped
Apr 13 19:58:54.404508 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:58:54.404535 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:58:54.404546 kernel: SELinux: policy capability open_perms=1
Apr 13 19:58:54.404558 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:58:54.404565 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:58:54.404573 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:58:54.404582 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:58:54.404590 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:58:54.404598 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:58:54.404606 kernel: audit: type=1403 audit(1776110330.351:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:58:54.404616 systemd[1]: Successfully loaded SELinux policy in 136.226ms.
Apr 13 19:58:54.404626 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.857ms.
Apr 13 19:58:54.404636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:58:54.404645 systemd[1]: Detected virtualization microsoft.
Apr 13 19:58:54.404654 systemd[1]: Detected architecture arm64.
Apr 13 19:58:54.404664 systemd[1]: Detected first boot.
Apr 13 19:58:54.404674 systemd[1]: Hostname set to .
Apr 13 19:58:54.404683 systemd[1]: Initializing machine ID from random generator.
Apr 13 19:58:54.404692 zram_generator::config[1185]: No configuration found.
Apr 13 19:58:54.404701 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:58:54.404710 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:58:54.404720 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:58:54.404730 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:58:54.404740 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:58:54.404750 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:58:54.404760 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:58:54.404769 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:58:54.404778 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:58:54.404789 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:58:54.404798 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:58:54.404807 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:58:54.404817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:58:54.404826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:58:54.404835 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:58:54.404845 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:58:54.404854 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:58:54.404863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:58:54.404874 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 13 19:58:54.404883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:58:54.404892 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:58:54.404904 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:58:54.404914 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:58:54.404923 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:58:54.404933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:58:54.404944 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:58:54.404954 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:58:54.404963 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:58:54.404972 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:58:54.404982 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:58:54.404991 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:58:54.405001 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:58:54.405012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:58:54.405022 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:58:54.405031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:58:54.405041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:58:54.405050 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:58:54.405060 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:58:54.405071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:58:54.405081 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:58:54.405091 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:58:54.405100 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:58:54.405110 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:58:54.405120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:58:54.405137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:58:54.405147 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:58:54.405159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:58:54.405170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:58:54.405179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:58:54.405189 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:58:54.405199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:58:54.405208 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:58:54.405218 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:58:54.405227 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:58:54.405237 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:58:54.405248 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:58:54.405258 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:58:54.405267 kernel: loop: module loaded
Apr 13 19:58:54.405275 kernel: fuse: init (API version 7.39)
Apr 13 19:58:54.405284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:58:54.405294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:58:54.405303 kernel: ACPI: bus type drm_connector registered
Apr 13 19:58:54.405328 systemd-journald[1271]: Collecting audit messages is disabled.
Apr 13 19:58:54.405350 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:58:54.405360 systemd-journald[1271]: Journal started
Apr 13 19:58:54.405380 systemd-journald[1271]: Runtime Journal (/run/log/journal/f7c931f282bb4099947bb3e729b90d26) is 8.0M, max 78.5M, 70.5M free.
Apr 13 19:58:53.430834 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:58:53.671399 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 19:58:53.671706 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 19:58:53.671989 systemd[1]: systemd-journald.service: Consumed 2.550s CPU time.
Apr 13 19:58:54.428385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:58:54.435574 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:58:54.435627 systemd[1]: Stopped verity-setup.service.
Apr 13 19:58:54.451006 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:58:54.451867 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:58:54.456594 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:58:54.461454 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:58:54.465770 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:58:54.470895 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:58:54.475760 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:58:54.480085 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:58:54.486572 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:58:54.492432 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:58:54.492567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:58:54.498011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:58:54.498152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:58:54.503656 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:58:54.503782 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:58:54.508838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:58:54.508967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:58:54.514870 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:58:54.514996 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:58:54.520518 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:58:54.520646 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:58:54.525705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:58:54.530762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:58:54.536616 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:58:54.542430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:58:54.557633 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:58:54.567199 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:58:54.576251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:58:54.583507 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:58:54.583544 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:58:54.589050 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:58:54.602240 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:58:54.608243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:58:54.612887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:58:54.638334 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:58:54.644051 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:58:54.648977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:58:54.651269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:58:54.656509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:58:54.658379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:58:54.676493 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:58:54.683291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:58:54.691309 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:58:54.698667 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:58:54.704300 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:58:54.710971 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:58:54.717635 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:58:54.728407 kernel: loop0: detected capacity change from 0 to 209336
Apr 13 19:58:54.730852 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:58:54.739096 systemd-journald[1271]: Time spent on flushing to /var/log/journal/f7c931f282bb4099947bb3e729b90d26 is 16.898ms for 901 entries.
Apr 13 19:58:54.739096 systemd-journald[1271]: System Journal (/var/log/journal/f7c931f282bb4099947bb3e729b90d26) is 8.0M, max 2.6G, 2.6G free.
Apr 13 19:58:54.833322 systemd-journald[1271]: Received client request to flush runtime journal.
Apr 13 19:58:54.746467 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:58:54.752141 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 19:58:54.796826 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:58:54.839496 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:58:54.854180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:58:54.874110 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Apr 13 19:58:54.874137 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Apr 13 19:58:54.878825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:58:54.892303 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:58:54.900523 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:58:54.901293 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:58:54.915148 kernel: loop1: detected capacity change from 0 to 114432
Apr 13 19:58:55.009205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:58:55.028228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:58:55.043941 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Apr 13 19:58:55.044235 systemd-tmpfiles[1341]: ACLs are not supported, ignoring.
Apr 13 19:58:55.048358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:58:55.280143 kernel: loop2: detected capacity change from 0 to 114328
Apr 13 19:58:55.366100 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:58:55.378336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:58:55.397499 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
Apr 13 19:58:55.670906 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:58:55.692874 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:58:55.698153 kernel: loop3: detected capacity change from 0 to 31320
Apr 13 19:58:55.732110 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 13 19:58:55.743304 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:58:55.804674 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:58:55.857215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 13 19:58:55.884118 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 19:58:55.884229 kernel: hv_vmbus: registering driver hv_balloon
Apr 13 19:58:55.884255 kernel: hv_vmbus: registering driver hyperv_fb
Apr 13 19:58:55.884271 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 13 19:58:55.892875 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 13 19:58:55.892982 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 13 19:58:55.901189 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 13 19:58:55.912180 kernel: Console: switching to colour dummy device 80x25
Apr 13 19:58:55.902910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:55.921266 kernel: Console: switching to colour frame buffer device 128x48
Apr 13 19:58:55.922401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:58:55.925957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:55.941418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:55.962048 systemd-networkd[1361]: lo: Link UP
Apr 13 19:58:55.962055 systemd-networkd[1361]: lo: Gained carrier
Apr 13 19:58:55.964971 systemd-networkd[1361]: Enumeration completed
Apr 13 19:58:55.965178 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:58:55.966450 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:58:55.966454 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:58:55.979569 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1366)
Apr 13 19:58:55.986482 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:58:56.012738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:58:56.012992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:56.032928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:58:56.037158 kernel: mlx5_core 5301:00:02.0 enP21249s1: Link up
Apr 13 19:58:56.055994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 13 19:58:56.069400 kernel: hv_netvsc 7ced8d78-c61e-7ced-8d78-c61e7ced8d78 eth0: Data path switched to VF: enP21249s1
Apr 13 19:58:56.071641 systemd-networkd[1361]: enP21249s1: Link UP
Apr 13 19:58:56.071734 systemd-networkd[1361]: eth0: Link UP
Apr 13 19:58:56.071737 systemd-networkd[1361]: eth0: Gained carrier
Apr 13 19:58:56.071751 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:58:56.074360 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:58:56.075422 systemd-networkd[1361]: enP21249s1: Gained carrier
Apr 13 19:58:56.087179 systemd-networkd[1361]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 13 19:58:56.109395 kernel: loop4: detected capacity change from 0 to 209336
Apr 13 19:58:56.132246 kernel: loop5: detected capacity change from 0 to 114432
Apr 13 19:58:56.152484 kernel: loop6: detected capacity change from 0 to 114328
Apr 13 19:58:56.169759 kernel: loop7: detected capacity change from 0 to 31320
Apr 13 19:58:56.170936 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:58:56.184418 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:58:56.190312 (sd-merge)[1442]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 13 19:58:56.190742 (sd-merge)[1442]: Merged extensions into '/usr'.
Apr 13 19:58:56.194280 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:58:56.211169 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:58:56.211272 systemd[1]: Reloading...
Apr 13 19:58:56.241706 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:58:56.275155 zram_generator::config[1477]: No configuration found.
Apr 13 19:58:56.396407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:58:56.470205 systemd[1]: Reloading finished in 258 ms.
Apr 13 19:58:56.504630 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:58:56.510642 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:58:56.518836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:58:56.527256 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:58:56.533258 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:58:56.540084 lvm[1534]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:58:56.543303 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:58:56.554309 systemd[1]: Reloading requested from client PID 1533 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:58:56.554323 systemd[1]: Reloading...
Apr 13 19:58:56.571034 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:58:56.571918 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:58:56.573273 systemd-tmpfiles[1535]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:58:56.573674 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
Apr 13 19:58:56.573719 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
Apr 13 19:58:56.577846 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:58:56.577855 systemd-tmpfiles[1535]: Skipping /boot
Apr 13 19:58:56.588092 systemd-tmpfiles[1535]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:58:56.588202 systemd-tmpfiles[1535]: Skipping /boot
Apr 13 19:58:56.624150 zram_generator::config[1564]: No configuration found.
Apr 13 19:58:56.733687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:58:56.812314 systemd[1]: Reloading finished in 257 ms.
Apr 13 19:58:56.830873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:58:56.840682 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:58:56.846441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:58:56.858620 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:58:56.871439 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:58:56.879418 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:58:56.888312 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:58:56.896464 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:58:56.905307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:58:56.910297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:58:56.919346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:58:56.930444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:58:56.939500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:58:56.940420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:58:56.940821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:58:56.947504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:58:56.948091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:58:56.955359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:58:56.955834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:58:56.968012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:58:56.978409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:58:56.989664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:58:56.998242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:58:57.003115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:58:57.005618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:58:57.011714 systemd-resolved[1633]: Positive Trust Anchors:
Apr 13 19:58:57.011728 systemd-resolved[1633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:58:57.011790 systemd-resolved[1633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:58:57.011939 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:58:57.020542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:58:57.020703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:58:57.026458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:58:57.026590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:58:57.033554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:58:57.033684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:58:57.041594 augenrules[1658]: No rules
Apr 13 19:58:57.042836 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:58:57.054389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:58:57.058311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:58:57.066610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:58:57.074415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:58:57.081371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:58:57.090550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:58:57.090614 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:58:57.095601 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:58:57.099269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:58:57.099404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:58:57.105617 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:58:57.107466 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:58:57.113851 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:58:57.114237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:58:57.114573 systemd-resolved[1633]: Using system hostname 'ci-4081.3.7-a-39cd336750'.
Apr 13 19:58:57.119806 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:58:57.125455 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:58:57.125610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:58:57.136051 systemd[1]: Reached target network.target - Network.
Apr 13 19:58:57.140862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:58:57.146732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:58:57.146799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:58:57.370278 systemd-networkd[1361]: eth0: Gained IPv6LL
Apr 13 19:58:57.371968 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 19:58:57.378992 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:58:57.771814 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:58:57.777637 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:59:00.135685 ldconfig[1314]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:59:00.160267 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:59:00.170324 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:59:00.183495 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:59:00.188775 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:59:00.193939 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:59:00.199394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 19:59:00.205431 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 19:59:00.210476 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 19:59:00.216178 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 19:59:00.221844 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 19:59:00.221876 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:59:00.226078 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:59:00.243908 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 19:59:00.250158 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 19:59:00.257704 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 19:59:00.262607 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 19:59:00.267407 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:59:00.271621 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:59:00.276016 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:59:00.276043 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 19:59:00.283208 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 13 19:59:00.290274 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 19:59:00.305084 (chronyd)[1685]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 13 19:59:00.307336 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 19:59:00.316208 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 19:59:00.323291 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 19:59:00.333589 chronyd[1693]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 13 19:59:00.336529 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 19:59:00.342400 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 19:59:00.342442 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 13 19:59:00.343455 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 13 19:59:00.349603 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 13 19:59:00.350391 KVP[1695]: KVP starting; pid is:1695
Apr 13 19:59:00.350645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:59:00.360199 chronyd[1693]: Timezone right/UTC failed leap second check, ignoring
Apr 13 19:59:00.360379 chronyd[1693]: Loaded seccomp filter (level 2)
Apr 13 19:59:00.362356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 19:59:00.367141 jq[1691]: false
Apr 13 19:59:00.370336 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:59:00.377300 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 19:59:00.386716 extend-filesystems[1694]: Found loop4
Apr 13 19:59:00.386716 extend-filesystems[1694]: Found loop5
Apr 13 19:59:00.386716 extend-filesystems[1694]: Found loop6
Apr 13 19:59:00.410365 kernel: hv_utils: KVP IC version 4.0
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found loop7
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda1
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda2
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda3
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found usr
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda4
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda6
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda7
Apr 13 19:59:00.410441 extend-filesystems[1694]: Found sda9
Apr 13 19:59:00.410441 extend-filesystems[1694]: Checking size of /dev/sda9
Apr 13 19:59:00.559993 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1733)
Apr 13 19:59:00.403873 KVP[1695]: KVP LIC Version: 3.1
Apr 13 19:59:00.387262 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 19:59:00.560290 extend-filesystems[1694]: Old size kept for /dev/sda9
Apr 13 19:59:00.560290 extend-filesystems[1694]: Found sr0
Apr 13 19:59:00.549485 dbus-daemon[1688]: [system] SELinux support is enabled
Apr 13 19:59:00.409486 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 19:59:00.427592 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 19:59:00.446457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 19:59:00.614150 update_engine[1722]: I20260413 19:59:00.505362 1722 main.cc:92] Flatcar Update Engine starting
Apr 13 19:59:00.614150 update_engine[1722]: I20260413 19:59:00.554375 1722 update_check_scheduler.cc:74] Next update check in 7m4s
Apr 13 19:59:00.446996 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 19:59:00.614429 jq[1725]: true
Apr 13 19:59:00.452304 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 19:59:00.471258 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 19:59:00.483856 systemd[1]: Started chronyd.service - NTP client/server.
Apr 13 19:59:00.504526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 19:59:00.504706 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 19:59:00.504950 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:59:00.505086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:59:00.517538 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 19:59:00.517691 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 19:59:00.533268 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:59:00.547469 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 19:59:00.548196 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 19:59:00.552413 systemd-logind[1714]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Apr 13 19:59:00.624094 jq[1760]: true
Apr 13 19:59:00.568941 systemd-logind[1714]: New seat seat0.
Apr 13 19:59:00.571468 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 19:59:00.582172 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:59:00.626044 (ntainerd)[1763]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 19:59:00.636853 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 19:59:00.637580 dbus-daemon[1688]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 19:59:00.636884 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 19:59:00.645027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 19:59:00.645050 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 19:59:00.654632 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 19:59:00.655608 tar[1744]: linux-arm64/LICENSE
Apr 13 19:59:00.655608 tar[1744]: linux-arm64/helm
Apr 13 19:59:00.663349 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 19:59:00.668475 coreos-metadata[1687]: Apr 13 19:59:00.667 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 13 19:59:00.678132 coreos-metadata[1687]: Apr 13 19:59:00.676 INFO Fetch successful
Apr 13 19:59:00.678132 coreos-metadata[1687]: Apr 13 19:59:00.676 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 13 19:59:00.681883 coreos-metadata[1687]: Apr 13 19:59:00.681 INFO Fetch successful
Apr 13 19:59:00.681883 coreos-metadata[1687]: Apr 13 19:59:00.681 INFO Fetching http://168.63.129.16/machine/6d7eb444-ad12-467a-9c5f-613e6a50984c/d4f0cbb4%2D7b9d%2D406a%2Db892%2D9130bef38c60.%5Fci%2D4081.3.7%2Da%2D39cd336750?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 13 19:59:00.684118 coreos-metadata[1687]: Apr 13 19:59:00.684 INFO Fetch successful
Apr 13 19:59:00.684118 coreos-metadata[1687]: Apr 13 19:59:00.684 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 13 19:59:00.696300 coreos-metadata[1687]: Apr 13 19:59:00.695 INFO Fetch successful
Apr 13 19:59:00.751665 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:59:00.761324 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:59:00.769140 bash[1796]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:59:00.786972 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:59:00.795221 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 13 19:59:00.904013 locksmithd[1782]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 19:59:01.300732 tar[1744]: linux-arm64/README.md
Apr 13 19:59:01.317774 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 19:59:01.361929 sshd_keygen[1720]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 19:59:01.382217 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 19:59:01.394468 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 19:59:01.402913 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 13 19:59:01.411793 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 19:59:01.411992 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 19:59:01.423365 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 19:59:01.442269 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 19:59:01.459464 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 19:59:01.467434 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 13 19:59:01.475645 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 19:59:01.490403 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 13 19:59:01.502595 containerd[1763]: time="2026-04-13T19:59:01.502504460Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:59:01.532189 containerd[1763]: time="2026-04-13T19:59:01.531968580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.533776060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.533813380Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.533831180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.533989060Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534006020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534063300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534075060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534247260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534263900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534276820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:59:01.534860 containerd[1763]: time="2026-04-13T19:59:01.534286220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.535169 containerd[1763]: time="2026-04-13T19:59:01.534350820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.535169 containerd[1763]: time="2026-04-13T19:59:01.534527100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:59:01.535169 containerd[1763]: time="2026-04-13T19:59:01.534622740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:59:01.535169 containerd[1763]: time="2026-04-13T19:59:01.534636140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:59:01.535169 containerd[1763]: time="2026-04-13T19:59:01.534705580Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:59:01.535169 containerd[1763]: time="2026-04-13T19:59:01.534743540Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:59:01.556458 containerd[1763]: time="2026-04-13T19:59:01.555863660Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:59:01.556458 containerd[1763]: time="2026-04-13T19:59:01.555932780Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:59:01.556458 containerd[1763]: time="2026-04-13T19:59:01.555950620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:59:01.556458 containerd[1763]: time="2026-04-13T19:59:01.555966340Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:59:01.556458 containerd[1763]: time="2026-04-13T19:59:01.555982900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:59:01.556458 containerd[1763]: time="2026-04-13T19:59:01.556167700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:59:01.556616 containerd[1763]: time="2026-04-13T19:59:01.556463340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:59:01.556641 containerd[1763]: time="2026-04-13T19:59:01.556612860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:59:01.556641 containerd[1763]: time="2026-04-13T19:59:01.556631340Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:59:01.556675 containerd[1763]: time="2026-04-13T19:59:01.556645140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:59:01.556675 containerd[1763]: time="2026-04-13T19:59:01.556660100Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556675 containerd[1763]: time="2026-04-13T19:59:01.556672700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556724 containerd[1763]: time="2026-04-13T19:59:01.556684980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556724 containerd[1763]: time="2026-04-13T19:59:01.556698580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556724 containerd[1763]: time="2026-04-13T19:59:01.556712580Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556779 containerd[1763]: time="2026-04-13T19:59:01.556725460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556779 containerd[1763]: time="2026-04-13T19:59:01.556738580Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556779 containerd[1763]: time="2026-04-13T19:59:01.556750500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:59:01.556779 containerd[1763]: time="2026-04-13T19:59:01.556770980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556851 containerd[1763]: time="2026-04-13T19:59:01.556786300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556851 containerd[1763]: time="2026-04-13T19:59:01.556798660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556851 containerd[1763]: time="2026-04-13T19:59:01.556811380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556851 containerd[1763]: time="2026-04-13T19:59:01.556822620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556851 containerd[1763]: time="2026-04-13T19:59:01.556835340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556851 containerd[1763]: time="2026-04-13T19:59:01.556847540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556860100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556872380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556892380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556906540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556919140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556931900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.556957 containerd[1763]: time="2026-04-13T19:59:01.556947020Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:59:01.557080 containerd[1763]: time="2026-04-13T19:59:01.556974900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.557080 containerd[1763]: time="2026-04-13T19:59:01.556987300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.557080 containerd[1763]: time="2026-04-13T19:59:01.556997540Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:59:01.557080 containerd[1763]: time="2026-04-13T19:59:01.557044300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:59:01.557080 containerd[1763]: time="2026-04-13T19:59:01.557061300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:59:01.557080 containerd[1763]: time="2026-04-13T19:59:01.557073100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:59:01.557202 containerd[1763]: time="2026-04-13T19:59:01.557084380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:59:01.557202 containerd[1763]: time="2026-04-13T19:59:01.557093660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.557202 containerd[1763]: time="2026-04-13T19:59:01.557105220Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:59:01.557202 containerd[1763]: time="2026-04-13T19:59:01.557115180Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:59:01.557202 containerd[1763]: time="2026-04-13T19:59:01.557157660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:59:01.558158 containerd[1763]: time="2026-04-13T19:59:01.557433620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 19:59:01.558158 containerd[1763]: time="2026-04-13T19:59:01.557495980Z" level=info msg="Connect containerd service" Apr 13 19:59:01.558158 containerd[1763]: time="2026-04-13T19:59:01.557533420Z" level=info msg="using legacy CRI server" Apr 13 19:59:01.558158 containerd[1763]: time="2026-04-13T19:59:01.557539900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 19:59:01.558158 containerd[1763]: time="2026-04-13T19:59:01.557630340Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558269140Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558575940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558615700Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558649380Z" level=info msg="Start subscribing containerd event" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558681740Z" level=info msg="Start recovering state" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558734100Z" level=info msg="Start event monitor" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558753220Z" level=info msg="Start snapshots syncer" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558761380Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:59:01.558915 containerd[1763]: time="2026-04-13T19:59:01.558772180Z" level=info msg="Start streaming server" Apr 13 19:59:01.558914 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:59:01.564931 containerd[1763]: time="2026-04-13T19:59:01.564731660Z" level=info msg="containerd successfully booted in 0.064930s" Apr 13 19:59:01.615639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:59:01.621759 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:59:01.621931 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:59:01.632261 systemd[1]: Startup finished in 609ms (kernel) + 13.388s (initrd) + 11.415s (userspace) = 25.413s. Apr 13 19:59:01.939574 login[1839]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:59:01.939905 login[1838]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:59:01.951696 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:59:01.956342 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 19:59:01.959877 systemd-logind[1714]: New session 1 of user core. 
Apr 13 19:59:01.964757 systemd-logind[1714]: New session 2 of user core. Apr 13 19:59:01.985049 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:59:01.990352 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:59:01.996648 (systemd)[1863]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:59:02.106139 kubelet[1851]: E0413 19:59:02.105046 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:59:02.108009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:59:02.110015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:59:02.157419 systemd[1863]: Queued start job for default target default.target. Apr 13 19:59:02.168010 systemd[1863]: Created slice app.slice - User Application Slice. Apr 13 19:59:02.168187 systemd[1863]: Reached target paths.target - Paths. Apr 13 19:59:02.168267 systemd[1863]: Reached target timers.target - Timers. Apr 13 19:59:02.169501 systemd[1863]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:59:02.180089 systemd[1863]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:59:02.180362 systemd[1863]: Reached target sockets.target - Sockets. Apr 13 19:59:02.180399 systemd[1863]: Reached target basic.target - Basic System. Apr 13 19:59:02.180442 systemd[1863]: Reached target default.target - Main User Target. Apr 13 19:59:02.180467 systemd[1863]: Startup finished in 177ms. Apr 13 19:59:02.180800 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:59:02.191302 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 13 19:59:02.192030 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 19:59:03.196143 waagent[1841]: 2026-04-13T19:59:03.195368Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 13 19:59:03.200110 waagent[1841]: 2026-04-13T19:59:03.200051Z INFO Daemon Daemon OS: flatcar 4081.3.7 Apr 13 19:59:03.203712 waagent[1841]: 2026-04-13T19:59:03.203674Z INFO Daemon Daemon Python: 3.11.9 Apr 13 19:59:03.209227 waagent[1841]: 2026-04-13T19:59:03.209178Z INFO Daemon Daemon Run daemon Apr 13 19:59:03.212640 waagent[1841]: 2026-04-13T19:59:03.212599Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.7' Apr 13 19:59:03.219473 waagent[1841]: 2026-04-13T19:59:03.219426Z INFO Daemon Daemon Using waagent for provisioning Apr 13 19:59:03.223813 waagent[1841]: 2026-04-13T19:59:03.223776Z INFO Daemon Daemon Activate resource disk Apr 13 19:59:03.227503 waagent[1841]: 2026-04-13T19:59:03.227468Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 13 19:59:03.236796 waagent[1841]: 2026-04-13T19:59:03.236749Z INFO Daemon Daemon Found device: None Apr 13 19:59:03.240732 waagent[1841]: 2026-04-13T19:59:03.240690Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 13 19:59:03.247233 waagent[1841]: 2026-04-13T19:59:03.247200Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 13 19:59:03.257637 waagent[1841]: 2026-04-13T19:59:03.257592Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 13 19:59:03.262076 waagent[1841]: 2026-04-13T19:59:03.262040Z INFO Daemon Daemon Running default provisioning handler Apr 13 19:59:03.272409 waagent[1841]: 2026-04-13T19:59:03.272336Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Apr 13 19:59:03.283419 waagent[1841]: 2026-04-13T19:59:03.283363Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 13 19:59:03.291185 waagent[1841]: 2026-04-13T19:59:03.291143Z INFO Daemon Daemon cloud-init is enabled: False Apr 13 19:59:03.295137 waagent[1841]: 2026-04-13T19:59:03.295099Z INFO Daemon Daemon Copying ovf-env.xml Apr 13 19:59:03.350226 waagent[1841]: 2026-04-13T19:59:03.350113Z INFO Daemon Daemon Successfully mounted dvd Apr 13 19:59:03.365424 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 13 19:59:03.367377 waagent[1841]: 2026-04-13T19:59:03.367317Z INFO Daemon Daemon Detect protocol endpoint Apr 13 19:59:03.371066 waagent[1841]: 2026-04-13T19:59:03.371024Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 13 19:59:03.375456 waagent[1841]: 2026-04-13T19:59:03.375420Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Apr 13 19:59:03.380515 waagent[1841]: 2026-04-13T19:59:03.380484Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 13 19:59:03.384672 waagent[1841]: 2026-04-13T19:59:03.384638Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 13 19:59:03.388653 waagent[1841]: 2026-04-13T19:59:03.388622Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 13 19:59:03.440496 waagent[1841]: 2026-04-13T19:59:03.440451Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 13 19:59:03.445929 waagent[1841]: 2026-04-13T19:59:03.445906Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 13 19:59:03.449969 waagent[1841]: 2026-04-13T19:59:03.449906Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 13 19:59:03.706289 waagent[1841]: 2026-04-13T19:59:03.706150Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 13 19:59:03.711308 waagent[1841]: 2026-04-13T19:59:03.711262Z INFO Daemon Daemon Forcing an update of the goal state. Apr 13 19:59:03.719014 waagent[1841]: 2026-04-13T19:59:03.718970Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 13 19:59:03.736672 waagent[1841]: 2026-04-13T19:59:03.736634Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Apr 13 19:59:03.741274 waagent[1841]: 2026-04-13T19:59:03.741235Z INFO Daemon Apr 13 19:59:03.743641 waagent[1841]: 2026-04-13T19:59:03.743605Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5cc82614-a994-4336-8bca-f210bd805836 eTag: 11129439945873345356 source: Fabric] Apr 13 19:59:03.752434 waagent[1841]: 2026-04-13T19:59:03.752397Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Apr 13 19:59:03.757661 waagent[1841]: 2026-04-13T19:59:03.757622Z INFO Daemon Apr 13 19:59:03.759961 waagent[1841]: 2026-04-13T19:59:03.759928Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 13 19:59:03.768717 waagent[1841]: 2026-04-13T19:59:03.768688Z INFO Daemon Daemon Downloading artifacts profile blob Apr 13 19:59:03.906866 waagent[1841]: 2026-04-13T19:59:03.906785Z INFO Daemon Downloaded certificate {'thumbprint': '38C2507D41136B6F45BCA04DC3E7D4F86E1A4F29', 'hasPrivateKey': True} Apr 13 19:59:03.914912 waagent[1841]: 2026-04-13T19:59:03.914851Z INFO Daemon Fetch goal state completed Apr 13 19:59:03.956761 waagent[1841]: 2026-04-13T19:59:03.956664Z INFO Daemon Daemon Starting provisioning Apr 13 19:59:03.960760 waagent[1841]: 2026-04-13T19:59:03.960714Z INFO Daemon Daemon Handle ovf-env.xml. Apr 13 19:59:03.964637 waagent[1841]: 2026-04-13T19:59:03.964596Z INFO Daemon Daemon Set hostname [ci-4081.3.7-a-39cd336750] Apr 13 19:59:03.971388 waagent[1841]: 2026-04-13T19:59:03.971337Z INFO Daemon Daemon Publish hostname [ci-4081.3.7-a-39cd336750] Apr 13 19:59:03.976565 waagent[1841]: 2026-04-13T19:59:03.976524Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 13 19:59:03.981631 waagent[1841]: 2026-04-13T19:59:03.981591Z INFO Daemon Daemon Primary interface is [eth0] Apr 13 19:59:04.034715 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 19:59:04.034721 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 13 19:59:04.034753 systemd-networkd[1361]: eth0: DHCP lease lost Apr 13 19:59:04.036182 waagent[1841]: 2026-04-13T19:59:04.035998Z INFO Daemon Daemon Create user account if not exists Apr 13 19:59:04.040951 waagent[1841]: 2026-04-13T19:59:04.040907Z INFO Daemon Daemon User core already exists, skip useradd Apr 13 19:59:04.045723 waagent[1841]: 2026-04-13T19:59:04.045685Z INFO Daemon Daemon Configure sudoer Apr 13 19:59:04.046219 systemd-networkd[1361]: eth0: DHCPv6 lease lost Apr 13 19:59:04.049497 waagent[1841]: 2026-04-13T19:59:04.049449Z INFO Daemon Daemon Configure sshd Apr 13 19:59:04.053174 waagent[1841]: 2026-04-13T19:59:04.053118Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 13 19:59:04.063653 waagent[1841]: 2026-04-13T19:59:04.063614Z INFO Daemon Daemon Deploy ssh public key. Apr 13 19:59:04.075183 systemd-networkd[1361]: eth0: DHCPv4 address 10.0.0.17/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 13 19:59:05.144217 waagent[1841]: 2026-04-13T19:59:05.144158Z INFO Daemon Daemon Provisioning complete Apr 13 19:59:05.159316 waagent[1841]: 2026-04-13T19:59:05.159275Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 13 19:59:05.164539 waagent[1841]: 2026-04-13T19:59:05.164497Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Apr 13 19:59:05.172136 waagent[1841]: 2026-04-13T19:59:05.172097Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 13 19:59:05.298183 waagent[1915]: 2026-04-13T19:59:05.297555Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 13 19:59:05.298183 waagent[1915]: 2026-04-13T19:59:05.297711Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.7 Apr 13 19:59:05.298183 waagent[1915]: 2026-04-13T19:59:05.297762Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 13 19:59:05.333923 waagent[1915]: 2026-04-13T19:59:05.333840Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.7; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 13 19:59:05.334269 waagent[1915]: 2026-04-13T19:59:05.334232Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 13 19:59:05.334412 waagent[1915]: 2026-04-13T19:59:05.334379Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 13 19:59:05.341980 waagent[1915]: 2026-04-13T19:59:05.341926Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 13 19:59:05.347525 waagent[1915]: 2026-04-13T19:59:05.347476Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Apr 13 19:59:05.348183 waagent[1915]: 2026-04-13T19:59:05.348142Z INFO ExtHandler Apr 13 19:59:05.348337 waagent[1915]: 2026-04-13T19:59:05.348304Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 008807ba-a4f9-4140-a51a-a316db1d7cfa eTag: 11129439945873345356 source: Fabric] Apr 13 19:59:05.349940 waagent[1915]: 2026-04-13T19:59:05.348677Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Apr 13 19:59:05.349940 waagent[1915]: 2026-04-13T19:59:05.349258Z INFO ExtHandler Apr 13 19:59:05.349940 waagent[1915]: 2026-04-13T19:59:05.349331Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 13 19:59:05.354149 waagent[1915]: 2026-04-13T19:59:05.352843Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 13 19:59:05.465547 waagent[1915]: 2026-04-13T19:59:05.465419Z INFO ExtHandler Downloaded certificate {'thumbprint': '38C2507D41136B6F45BCA04DC3E7D4F86E1A4F29', 'hasPrivateKey': True} Apr 13 19:59:05.466027 waagent[1915]: 2026-04-13T19:59:05.465948Z INFO ExtHandler Fetch goal state completed Apr 13 19:59:05.479837 waagent[1915]: 2026-04-13T19:59:05.479780Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1915 Apr 13 19:59:05.479985 waagent[1915]: 2026-04-13T19:59:05.479950Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 13 19:59:05.481586 waagent[1915]: 2026-04-13T19:59:05.481545Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.7', '', 'Flatcar Container Linux by Kinvolk'] Apr 13 19:59:05.481937 waagent[1915]: 2026-04-13T19:59:05.481904Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 13 19:59:05.919869 waagent[1915]: 2026-04-13T19:59:05.919823Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 13 19:59:05.920083 waagent[1915]: 2026-04-13T19:59:05.920046Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 13 19:59:05.926338 waagent[1915]: 2026-04-13T19:59:05.926266Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 13 19:59:05.932712 systemd[1]: Reloading requested from client PID 1928 ('systemctl') (unit waagent.service)... Apr 13 19:59:05.932726 systemd[1]: Reloading... 
Apr 13 19:59:06.017147 zram_generator::config[1968]: No configuration found. Apr 13 19:59:06.111209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:59:06.185780 systemd[1]: Reloading finished in 252 ms. Apr 13 19:59:06.212995 waagent[1915]: 2026-04-13T19:59:06.212881Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 13 19:59:06.218963 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit waagent.service)... Apr 13 19:59:06.218976 systemd[1]: Reloading... Apr 13 19:59:06.281197 zram_generator::config[2048]: No configuration found. Apr 13 19:59:06.403260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:59:06.477891 systemd[1]: Reloading finished in 258 ms. Apr 13 19:59:06.502433 waagent[1915]: 2026-04-13T19:59:06.502306Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 13 19:59:06.502684 waagent[1915]: 2026-04-13T19:59:06.502465Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 13 19:59:06.876427 waagent[1915]: 2026-04-13T19:59:06.876341Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Apr 13 19:59:06.877033 waagent[1915]: 2026-04-13T19:59:06.876986Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 13 19:59:06.877870 waagent[1915]: 2026-04-13T19:59:06.877787Z INFO ExtHandler ExtHandler Starting env monitor service. 
Apr 13 19:59:06.878293 waagent[1915]: 2026-04-13T19:59:06.878199Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 13 19:59:06.878659 waagent[1915]: 2026-04-13T19:59:06.878577Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 13 19:59:06.878880 waagent[1915]: 2026-04-13T19:59:06.878788Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 13 19:59:06.879938 waagent[1915]: 2026-04-13T19:59:06.879219Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 13 19:59:06.879938 waagent[1915]: 2026-04-13T19:59:06.879319Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 13 19:59:06.879938 waagent[1915]: 2026-04-13T19:59:06.879516Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Apr 13 19:59:06.879938 waagent[1915]: 2026-04-13T19:59:06.879674Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 13 19:59:06.879938 waagent[1915]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 13 19:59:06.879938 waagent[1915]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 13 19:59:06.879938 waagent[1915]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 13 19:59:06.879938 waagent[1915]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 13 19:59:06.879938 waagent[1915]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 13 19:59:06.879938 waagent[1915]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 13 19:59:06.880251 waagent[1915]: 2026-04-13T19:59:06.880209Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 13 19:59:06.880458 waagent[1915]: 2026-04-13T19:59:06.880390Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 13 19:59:06.880550 waagent[1915]: 2026-04-13T19:59:06.880489Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Apr 13 19:59:06.880777 waagent[1915]: 2026-04-13T19:59:06.880737Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 13 19:59:06.881371 waagent[1915]: 2026-04-13T19:59:06.881286Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 13 19:59:06.881547 waagent[1915]: 2026-04-13T19:59:06.881509Z INFO EnvHandler ExtHandler Configure routes Apr 13 19:59:06.882883 waagent[1915]: 2026-04-13T19:59:06.882832Z INFO EnvHandler ExtHandler Gateway:None Apr 13 19:59:06.883343 waagent[1915]: 2026-04-13T19:59:06.883302Z INFO EnvHandler ExtHandler Routes:None Apr 13 19:59:06.887202 waagent[1915]: 2026-04-13T19:59:06.887150Z INFO ExtHandler ExtHandler Apr 13 19:59:06.887296 waagent[1915]: 2026-04-13T19:59:06.887262Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e1f5182a-b668-4947-80fa-77054bc42532 correlation 1ac961dd-19b8-44cb-bd1a-b5250f74d7b1 created: 2026-04-13T19:58:06.433174Z] Apr 13 19:59:06.888180 waagent[1915]: 2026-04-13T19:59:06.888110Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 13 19:59:06.890277 waagent[1915]: 2026-04-13T19:59:06.890198Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Apr 13 19:59:06.929424 waagent[1915]: 2026-04-13T19:59:06.929365Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4F911347-DA3C-430A-A42C-DC82A1F50AE4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 13 19:59:06.938650 waagent[1915]: 2026-04-13T19:59:06.938255Z INFO MonitorHandler ExtHandler Network interfaces: Apr 13 19:59:06.938650 waagent[1915]: Executing ['ip', '-a', '-o', 'link']: Apr 13 19:59:06.938650 waagent[1915]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 13 19:59:06.938650 waagent[1915]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:78:c6:1e brd ff:ff:ff:ff:ff:ff Apr 13 19:59:06.938650 waagent[1915]: 3: enP21249s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:78:c6:1e brd ff:ff:ff:ff:ff:ff\ altname enP21249p0s2 Apr 13 19:59:06.938650 waagent[1915]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 13 19:59:06.938650 waagent[1915]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 13 19:59:06.938650 waagent[1915]: 2: eth0 inet 10.0.0.17/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 13 19:59:06.938650 waagent[1915]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 13 19:59:06.938650 waagent[1915]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 13 19:59:06.938650 waagent[1915]: 2: eth0 inet6 fe80::7eed:8dff:fe78:c61e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 13 19:59:06.986224 waagent[1915]: 2026-04-13T19:59:06.985849Z INFO EnvHandler ExtHandler Successfully 
added Azure fabric firewall rules. Current Firewall rules: Apr 13 19:59:06.986224 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:59:06.986224 waagent[1915]: pkts bytes target prot opt in out source destination Apr 13 19:59:06.986224 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:59:06.986224 waagent[1915]: pkts bytes target prot opt in out source destination Apr 13 19:59:06.986224 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:59:06.986224 waagent[1915]: pkts bytes target prot opt in out source destination Apr 13 19:59:06.986224 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 13 19:59:06.986224 waagent[1915]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 13 19:59:06.986224 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 13 19:59:06.988889 waagent[1915]: 2026-04-13T19:59:06.988838Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 13 19:59:06.988889 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:59:06.988889 waagent[1915]: pkts bytes target prot opt in out source destination Apr 13 19:59:06.988889 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:59:06.988889 waagent[1915]: pkts bytes target prot opt in out source destination Apr 13 19:59:06.988889 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 13 19:59:06.988889 waagent[1915]: pkts bytes target prot opt in out source destination Apr 13 19:59:06.988889 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 13 19:59:06.988889 waagent[1915]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 13 19:59:06.988889 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 13 19:59:06.989117 waagent[1915]: 2026-04-13T19:59:06.989078Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 13 
19:59:12.310621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:59:12.319448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:59:12.418214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:59:12.428446 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:59:12.516657 kubelet[2143]: E0413 19:59:12.516602 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:59:12.520310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:59:12.520578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:59:22.560736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:59:22.567422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:59:22.873964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:59:22.877968 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:59:22.910503 kubelet[2158]: E0413 19:59:22.910455 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:59:22.913440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:59:22.913678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:59:24.144029 chronyd[1693]: Selected source PHC0 Apr 13 19:59:24.815277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 19:59:24.816756 systemd[1]: Started sshd@0-10.0.0.17:22-20.229.252.112:56916.service - OpenSSH per-connection server daemon (20.229.252.112:56916). Apr 13 19:59:25.773322 sshd[2166]: Accepted publickey for core from 20.229.252.112 port 56916 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 19:59:25.774087 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:59:25.777508 systemd-logind[1714]: New session 3 of user core. Apr 13 19:59:25.785250 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:59:26.560011 systemd[1]: Started sshd@1-10.0.0.17:22-20.229.252.112:51402.service - OpenSSH per-connection server daemon (20.229.252.112:51402). Apr 13 19:59:27.469996 sshd[2171]: Accepted publickey for core from 20.229.252.112 port 51402 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 19:59:27.471279 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:59:27.475006 systemd-logind[1714]: New session 4 of user core. 
Apr 13 19:59:27.483250 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 19:59:28.105236 sshd[2171]: pam_unix(sshd:session): session closed for user core
Apr 13 19:59:28.108968 systemd[1]: sshd@1-10.0.0.17:22-20.229.252.112:51402.service: Deactivated successfully.
Apr 13 19:59:28.110452 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 19:59:28.111038 systemd-logind[1714]: Session 4 logged out. Waiting for processes to exit.
Apr 13 19:59:28.111782 systemd-logind[1714]: Removed session 4.
Apr 13 19:59:28.277354 systemd[1]: Started sshd@2-10.0.0.17:22-20.229.252.112:51412.service - OpenSSH per-connection server daemon (20.229.252.112:51412).
Apr 13 19:59:29.182486 sshd[2178]: Accepted publickey for core from 20.229.252.112 port 51412 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A
Apr 13 19:59:29.183730 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:59:29.187836 systemd-logind[1714]: New session 5 of user core.
Apr 13 19:59:29.193396 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 19:59:29.815238 sshd[2178]: pam_unix(sshd:session): session closed for user core
Apr 13 19:59:29.818951 systemd[1]: sshd@2-10.0.0.17:22-20.229.252.112:51412.service: Deactivated successfully.
Apr 13 19:59:29.820523 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 19:59:29.821221 systemd-logind[1714]: Session 5 logged out. Waiting for processes to exit.
Apr 13 19:59:29.822008 systemd-logind[1714]: Removed session 5.
Apr 13 19:59:29.967550 systemd[1]: Started sshd@3-10.0.0.17:22-20.229.252.112:51424.service - OpenSSH per-connection server daemon (20.229.252.112:51424).
Apr 13 19:59:30.849226 sshd[2185]: Accepted publickey for core from 20.229.252.112 port 51424 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A
Apr 13 19:59:30.849975 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:59:30.854683 systemd-logind[1714]: New session 6 of user core.
Apr 13 19:59:30.861494 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:59:31.463777 sshd[2185]: pam_unix(sshd:session): session closed for user core
Apr 13 19:59:31.466912 systemd[1]: sshd@3-10.0.0.17:22-20.229.252.112:51424.service: Deactivated successfully.
Apr 13 19:59:31.468589 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:59:31.469851 systemd-logind[1714]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:59:31.470871 systemd-logind[1714]: Removed session 6.
Apr 13 19:59:31.621657 systemd[1]: Started sshd@4-10.0.0.17:22-20.229.252.112:51438.service - OpenSSH per-connection server daemon (20.229.252.112:51438).
Apr 13 19:59:32.531870 sshd[2192]: Accepted publickey for core from 20.229.252.112 port 51438 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A
Apr 13 19:59:32.533147 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:59:32.536538 systemd-logind[1714]: New session 7 of user core.
Apr 13 19:59:32.545285 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:59:33.060638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 19:59:33.066281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:59:33.187429 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:59:33.187703 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:59:33.284883 sudo[2195]: pam_unix(sudo:session): session closed for user root
Apr 13 19:59:33.413609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:59:33.426352 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:59:33.434040 sshd[2192]: pam_unix(sshd:session): session closed for user core
Apr 13 19:59:33.438260 systemd[1]: sshd@4-10.0.0.17:22-20.229.252.112:51438.service: Deactivated successfully.
Apr 13 19:59:33.440828 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 19:59:33.443717 systemd-logind[1714]: Session 7 logged out. Waiting for processes to exit.
Apr 13 19:59:33.445495 systemd-logind[1714]: Removed session 7.
Apr 13 19:59:33.463549 kubelet[2205]: E0413 19:59:33.463495 2205 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:59:33.466322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:59:33.466455 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:59:33.591335 systemd[1]: Started sshd@5-10.0.0.17:22-20.229.252.112:51454.service - OpenSSH per-connection server daemon (20.229.252.112:51454).
Apr 13 19:59:34.500070 sshd[2215]: Accepted publickey for core from 20.229.252.112 port 51454 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A
Apr 13 19:59:34.500933 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:59:34.504376 systemd-logind[1714]: New session 8 of user core.
Apr 13 19:59:34.511337 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 19:59:34.985748 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:59:34.986020 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:59:34.989039 sudo[2219]: pam_unix(sudo:session): session closed for user root
Apr 13 19:59:34.993306 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:59:34.993552 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:59:35.005701 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:59:35.006744 auditctl[2222]: No rules
Apr 13 19:59:35.007165 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:59:35.007317 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:59:35.010796 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:59:35.031267 augenrules[2240]: No rules
Apr 13 19:59:35.032556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:59:35.034332 sudo[2218]: pam_unix(sudo:session): session closed for user root
Apr 13 19:59:35.183329 sshd[2215]: pam_unix(sshd:session): session closed for user core
Apr 13 19:59:35.186157 systemd-logind[1714]: Session 8 logged out. Waiting for processes to exit.
Apr 13 19:59:35.186411 systemd[1]: sshd@5-10.0.0.17:22-20.229.252.112:51454.service: Deactivated successfully.
Apr 13 19:59:35.187770 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 19:59:35.189549 systemd-logind[1714]: Removed session 8.
Apr 13 19:59:35.341344 systemd[1]: Started sshd@6-10.0.0.17:22-20.229.252.112:35338.service - OpenSSH per-connection server daemon (20.229.252.112:35338).
Apr 13 19:59:36.257024 sshd[2248]: Accepted publickey for core from 20.229.252.112 port 35338 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A
Apr 13 19:59:36.257819 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:59:36.261319 systemd-logind[1714]: New session 9 of user core.
Apr 13 19:59:36.269306 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 19:59:36.745957 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:59:36.746269 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:59:37.641320 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:59:37.642601 (dockerd)[2266]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:59:38.320977 dockerd[2266]: time="2026-04-13T19:59:38.320926575Z" level=info msg="Starting up"
Apr 13 19:59:38.673361 dockerd[2266]: time="2026-04-13T19:59:38.673089598Z" level=info msg="Loading containers: start."
Apr 13 19:59:38.852147 kernel: Initializing XFRM netlink socket
Apr 13 19:59:38.993842 systemd-networkd[1361]: docker0: Link UP
Apr 13 19:59:39.017507 dockerd[2266]: time="2026-04-13T19:59:39.017201210Z" level=info msg="Loading containers: done."
Apr 13 19:59:39.027347 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3874172612-merged.mount: Deactivated successfully.
Apr 13 19:59:39.037965 dockerd[2266]: time="2026-04-13T19:59:39.037568397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:59:39.037965 dockerd[2266]: time="2026-04-13T19:59:39.037668477Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:59:39.037965 dockerd[2266]: time="2026-04-13T19:59:39.037759517Z" level=info msg="Daemon has completed initialization"
Apr 13 19:59:39.101963 dockerd[2266]: time="2026-04-13T19:59:39.101529961Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:59:39.101680 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:59:39.570194 containerd[1763]: time="2026-04-13T19:59:39.570138177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 19:59:40.497507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815023893.mount: Deactivated successfully.
Apr 13 19:59:42.326535 containerd[1763]: time="2026-04-13T19:59:42.325448875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:42.332586 containerd[1763]: time="2026-04-13T19:59:42.332555803Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283683"
Apr 13 19:59:42.341165 containerd[1763]: time="2026-04-13T19:59:42.340233972Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:42.349143 containerd[1763]: time="2026-04-13T19:59:42.347416700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:42.349976 containerd[1763]: time="2026-04-13T19:59:42.349944863Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 2.779770286s"
Apr 13 19:59:42.350024 containerd[1763]: time="2026-04-13T19:59:42.349983903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\""
Apr 13 19:59:42.350792 containerd[1763]: time="2026-04-13T19:59:42.350768344Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 19:59:43.560574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 13 19:59:43.572308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:59:43.679034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:59:43.682863 (kubelet)[2468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:59:43.777504 kubelet[2468]: E0413 19:59:43.777447 2468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:59:43.780800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:59:43.781099 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:59:44.012521 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 13 19:59:44.515011 containerd[1763]: time="2026-04-13T19:59:44.514960184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:44.518382 containerd[1763]: time="2026-04-13T19:59:44.518354508Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551902"
Apr 13 19:59:44.522174 containerd[1763]: time="2026-04-13T19:59:44.522113232Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:44.533801 containerd[1763]: time="2026-04-13T19:59:44.533258045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:44.534516 containerd[1763]: time="2026-04-13T19:59:44.534485407Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 2.183685783s"
Apr 13 19:59:44.534581 containerd[1763]: time="2026-04-13T19:59:44.534516927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\""
Apr 13 19:59:44.535817 containerd[1763]: time="2026-04-13T19:59:44.535787848Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 19:59:46.227261 update_engine[1722]: I20260413 19:59:46.227195 1722 update_attempter.cc:509] Updating boot flags...
Apr 13 19:59:46.275190 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2491)
Apr 13 19:59:47.100307 containerd[1763]: time="2026-04-13T19:59:47.100256729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:47.103879 containerd[1763]: time="2026-04-13T19:59:47.103844053Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301233"
Apr 13 19:59:47.107284 containerd[1763]: time="2026-04-13T19:59:47.107253698Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:47.112440 containerd[1763]: time="2026-04-13T19:59:47.112390465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:47.113750 containerd[1763]: time="2026-04-13T19:59:47.113433466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 2.577530858s"
Apr 13 19:59:47.113750 containerd[1763]: time="2026-04-13T19:59:47.113466746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\""
Apr 13 19:59:47.114652 containerd[1763]: time="2026-04-13T19:59:47.114630508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 19:59:48.294415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973883074.mount: Deactivated successfully.
Apr 13 19:59:48.770156 containerd[1763]: time="2026-04-13T19:59:48.769769558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:48.773171 containerd[1763]: time="2026-04-13T19:59:48.773147323Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148953"
Apr 13 19:59:48.777416 containerd[1763]: time="2026-04-13T19:59:48.777174049Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:48.781958 containerd[1763]: time="2026-04-13T19:59:48.781927775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:48.782482 containerd[1763]: time="2026-04-13T19:59:48.782450296Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 1.667718588s"
Apr 13 19:59:48.782546 containerd[1763]: time="2026-04-13T19:59:48.782483576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\""
Apr 13 19:59:48.783438 containerd[1763]: time="2026-04-13T19:59:48.783245057Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 19:59:49.420023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507929406.mount: Deactivated successfully.
Apr 13 19:59:50.831553 containerd[1763]: time="2026-04-13T19:59:50.831504711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:50.835820 containerd[1763]: time="2026-04-13T19:59:50.835788277Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 13 19:59:50.847379 containerd[1763]: time="2026-04-13T19:59:50.847354374Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:50.852882 containerd[1763]: time="2026-04-13T19:59:50.852853942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:50.854078 containerd[1763]: time="2026-04-13T19:59:50.854053103Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.070778406s"
Apr 13 19:59:50.854109 containerd[1763]: time="2026-04-13T19:59:50.854083543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 13 19:59:50.855110 containerd[1763]: time="2026-04-13T19:59:50.855086665Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 19:59:51.573450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657182506.mount: Deactivated successfully.
Apr 13 19:59:51.601194 containerd[1763]: time="2026-04-13T19:59:51.601147948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:51.606079 containerd[1763]: time="2026-04-13T19:59:51.605905915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 13 19:59:51.610150 containerd[1763]: time="2026-04-13T19:59:51.609432320Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:51.616890 containerd[1763]: time="2026-04-13T19:59:51.616394890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:51.617841 containerd[1763]: time="2026-04-13T19:59:51.617801732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 762.684147ms"
Apr 13 19:59:51.617841 containerd[1763]: time="2026-04-13T19:59:51.617841652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 13 19:59:51.618534 containerd[1763]: time="2026-04-13T19:59:51.618508653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 19:59:52.328677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349020279.mount: Deactivated successfully.
Apr 13 19:59:53.644900 containerd[1763]: time="2026-04-13T19:59:53.644847555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:53.649134 containerd[1763]: time="2026-04-13T19:59:53.648910760Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Apr 13 19:59:53.655739 containerd[1763]: time="2026-04-13T19:59:53.655322490Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:53.672420 containerd[1763]: time="2026-04-13T19:59:53.672379995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:59:53.673634 containerd[1763]: time="2026-04-13T19:59:53.673600196Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.055062223s"
Apr 13 19:59:53.673700 containerd[1763]: time="2026-04-13T19:59:53.673633316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 13 19:59:53.810693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 13 19:59:53.817462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:59:53.930317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:59:53.934017 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:59:54.015853 kubelet[2668]: E0413 19:59:54.015801 2668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:59:54.018825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:59:54.018974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:00:01.253839 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:00:01.260713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:00:01.290190 systemd[1]: Reloading requested from client PID 2692 ('systemctl') (unit session-9.scope)...
Apr 13 20:00:01.290201 systemd[1]: Reloading...
Apr 13 20:00:01.396159 zram_generator::config[2744]: No configuration found.
Apr 13 20:00:01.474169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:00:01.552681 systemd[1]: Reloading finished in 262 ms.
Apr 13 20:00:01.761254 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:00:01.761362 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:00:01.761626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:00:01.768429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:00:02.421728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:00:02.426406 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:00:02.468859 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:00:02.468859 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:00:02.468859 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:00:02.469224 kubelet[2796]: I0413 20:00:02.468917 2796 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:00:03.034141 kubelet[2796]: I0413 20:00:03.032524 2796 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 20:00:03.034141 kubelet[2796]: I0413 20:00:03.032558 2796 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:00:03.034141 kubelet[2796]: I0413 20:00:03.032961 2796 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:00:03.054296 kubelet[2796]: E0413 20:00:03.054262 2796 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:00:03.057096 kubelet[2796]: I0413 20:00:03.057072 2796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:00:03.062648 kubelet[2796]: E0413 20:00:03.062618 2796 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:00:03.062648 kubelet[2796]: I0413 20:00:03.062647 2796 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:00:03.065752 kubelet[2796]: I0413 20:00:03.065734 2796 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 20:00:03.066938 kubelet[2796]: I0413 20:00:03.066901 2796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:00:03.067089 kubelet[2796]: I0413 20:00:03.066939 2796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-39cd336750","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:00:03.067191 kubelet[2796]: I0413 20:00:03.067094 2796 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:00:03.067191 kubelet[2796]: I0413 20:00:03.067102 2796 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 20:00:03.067238 kubelet[2796]: I0413 20:00:03.067227 2796 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:00:03.071381 kubelet[2796]: I0413 20:00:03.071003 2796 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 20:00:03.071381 kubelet[2796]: I0413 20:00:03.071038 2796 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:00:03.071381 kubelet[2796]: I0413 20:00:03.071064 2796 kubelet.go:386] "Adding apiserver pod source"
Apr 13 20:00:03.071381 kubelet[2796]: I0413 20:00:03.071078 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:00:03.077707 kubelet[2796]: E0413 20:00:03.077686 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-39cd336750&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 20:00:03.077891 kubelet[2796]: I0413 20:00:03.077878 2796 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:00:03.078526 kubelet[2796]: I0413 20:00:03.078506 2796 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:00:03.078641 kubelet[2796]: W0413 20:00:03.078631 2796 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:00:03.082032 kubelet[2796]: I0413 20:00:03.081977 2796 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:00:03.082032 kubelet[2796]: I0413 20:00:03.082013 2796 server.go:1289] "Started kubelet" Apr 13 20:00:03.082117 kubelet[2796]: E0413 20:00:03.082099 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:00:03.083143 kubelet[2796]: I0413 20:00:03.082419 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:00:03.083143 kubelet[2796]: I0413 20:00:03.082786 2796 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:00:03.087423 kubelet[2796]: I0413 20:00:03.087404 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:00:03.089463 kubelet[2796]: I0413 20:00:03.089417 2796 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:00:03.090432 kubelet[2796]: I0413 20:00:03.090404 2796 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:00:03.092372 kubelet[2796]: I0413 20:00:03.092351 2796 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:00:03.092645 kubelet[2796]: E0413 20:00:03.092625 2796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-39cd336750\" not found" Apr 13 20:00:03.095758 kubelet[2796]: E0413 20:00:03.094172 2796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.7-a-39cd336750.18a603005993ee89 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-39cd336750,UID:ci-4081.3.7-a-39cd336750,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-39cd336750,},FirstTimestamp:2026-04-13 20:00:03.081989769 +0000 UTC m=+0.650913799,LastTimestamp:2026-04-13 20:00:03.081989769 +0000 UTC m=+0.650913799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-39cd336750,}" Apr 13 20:00:03.095920 kubelet[2796]: E0413 20:00:03.095821 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-39cd336750?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="200ms" Apr 13 20:00:03.095920 kubelet[2796]: I0413 20:00:03.095859 2796 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:00:03.096916 kubelet[2796]: E0413 20:00:03.096553 2796 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:00:03.097965 kubelet[2796]: I0413 20:00:03.097945 2796 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:00:03.097965 kubelet[2796]: I0413 20:00:03.097959 2796 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:00:03.098056 kubelet[2796]: I0413 20:00:03.098018 2796 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:00:03.098227 kubelet[2796]: I0413 20:00:03.092385 2796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:00:03.098388 kubelet[2796]: I0413 20:00:03.098377 2796 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:00:03.103098 kubelet[2796]: E0413 20:00:03.103066 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:00:03.148798 kubelet[2796]: I0413 20:00:03.148751 2796 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:00:03.151402 kubelet[2796]: I0413 20:00:03.150539 2796 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:00:03.151402 kubelet[2796]: I0413 20:00:03.150569 2796 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:00:03.151402 kubelet[2796]: I0413 20:00:03.150593 2796 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:00:03.151402 kubelet[2796]: I0413 20:00:03.150600 2796 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:00:03.151402 kubelet[2796]: E0413 20:00:03.150643 2796 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:00:03.151892 kubelet[2796]: E0413 20:00:03.151858 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:00:03.193167 kubelet[2796]: E0413 20:00:03.193118 2796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-39cd336750\" not found" Apr 13 20:00:03.195064 kubelet[2796]: I0413 20:00:03.195029 2796 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:00:03.195064 kubelet[2796]: I0413 20:00:03.195060 2796 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:00:03.195180 kubelet[2796]: I0413 20:00:03.195077 2796 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:00:03.206032 kubelet[2796]: I0413 20:00:03.206014 2796 policy_none.go:49] "None policy: Start" Apr 13 20:00:03.206071 kubelet[2796]: I0413 20:00:03.206037 2796 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:00:03.206071 kubelet[2796]: I0413 20:00:03.206048 2796 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:00:03.216420 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 20:00:03.230396 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 20:00:03.233689 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 20:00:03.248076 kubelet[2796]: E0413 20:00:03.248054 2796 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:00:03.248738 kubelet[2796]: I0413 20:00:03.248357 2796 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:00:03.248738 kubelet[2796]: I0413 20:00:03.248374 2796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:00:03.248738 kubelet[2796]: I0413 20:00:03.248667 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:00:03.250289 kubelet[2796]: E0413 20:00:03.250270 2796 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:00:03.250408 kubelet[2796]: E0413 20:00:03.250396 2796 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.7-a-39cd336750\" not found" Apr 13 20:00:03.265382 systemd[1]: Created slice kubepods-burstable-podb15fc16fda19a7117353c204b9948eb4.slice - libcontainer container kubepods-burstable-podb15fc16fda19a7117353c204b9948eb4.slice. Apr 13 20:00:03.274932 kubelet[2796]: E0413 20:00:03.274904 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.278997 systemd[1]: Created slice kubepods-burstable-pod87317d55f1cc7e9447fb218717625dfb.slice - libcontainer container kubepods-burstable-pod87317d55f1cc7e9447fb218717625dfb.slice. 
Apr 13 20:00:03.281570 kubelet[2796]: E0413 20:00:03.281545 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.284856 systemd[1]: Created slice kubepods-burstable-pod0634dc6e98ef202d06a304b9c828f17f.slice - libcontainer container kubepods-burstable-pod0634dc6e98ef202d06a304b9c828f17f.slice. Apr 13 20:00:03.286893 kubelet[2796]: E0413 20:00:03.286872 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.296255 kubelet[2796]: E0413 20:00:03.296217 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-39cd336750?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms" Apr 13 20:00:03.299485 kubelet[2796]: I0413 20:00:03.299415 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299485 kubelet[2796]: I0413 20:00:03.299443 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299485 kubelet[2796]: I0413 20:00:03.299461 2796 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0634dc6e98ef202d06a304b9c828f17f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-39cd336750\" (UID: \"0634dc6e98ef202d06a304b9c828f17f\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299584 kubelet[2796]: I0413 20:00:03.299516 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b15fc16fda19a7117353c204b9948eb4-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-39cd336750\" (UID: \"b15fc16fda19a7117353c204b9948eb4\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299584 kubelet[2796]: I0413 20:00:03.299543 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b15fc16fda19a7117353c204b9948eb4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-39cd336750\" (UID: \"b15fc16fda19a7117353c204b9948eb4\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299584 kubelet[2796]: I0413 20:00:03.299558 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b15fc16fda19a7117353c204b9948eb4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-39cd336750\" (UID: \"b15fc16fda19a7117353c204b9948eb4\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299654 kubelet[2796]: I0413 20:00:03.299577 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299654 kubelet[2796]: I0413 20:00:03.299600 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.299654 kubelet[2796]: I0413 20:00:03.299619 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.350146 kubelet[2796]: I0413 20:00:03.349919 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.350367 kubelet[2796]: E0413 20:00:03.350262 2796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.552683 kubelet[2796]: I0413 20:00:03.552586 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.553248 kubelet[2796]: E0413 20:00:03.553222 2796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.576239 containerd[1763]: time="2026-04-13T20:00:03.575936545Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-39cd336750,Uid:b15fc16fda19a7117353c204b9948eb4,Namespace:kube-system,Attempt:0,}" Apr 13 20:00:03.582941 containerd[1763]: time="2026-04-13T20:00:03.582911035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-39cd336750,Uid:87317d55f1cc7e9447fb218717625dfb,Namespace:kube-system,Attempt:0,}" Apr 13 20:00:03.587630 containerd[1763]: time="2026-04-13T20:00:03.587600002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-39cd336750,Uid:0634dc6e98ef202d06a304b9c828f17f,Namespace:kube-system,Attempt:0,}" Apr 13 20:00:03.696940 kubelet[2796]: E0413 20:00:03.696898 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-39cd336750?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms" Apr 13 20:00:03.954848 kubelet[2796]: I0413 20:00:03.954811 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.955169 kubelet[2796]: E0413 20:00:03.955108 2796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:03.984692 kubelet[2796]: E0413 20:00:03.984652 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:00:04.125178 kubelet[2796]: E0413 20:00:04.125112 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:00:04.352004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278109664.mount: Deactivated successfully. Apr 13 20:00:04.386039 containerd[1763]: time="2026-04-13T20:00:04.385957917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:00:04.389560 containerd[1763]: time="2026-04-13T20:00:04.389524601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 20:00:04.393033 containerd[1763]: time="2026-04-13T20:00:04.393001365Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:00:04.400503 containerd[1763]: time="2026-04-13T20:00:04.399685573Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:00:04.403227 containerd[1763]: time="2026-04-13T20:00:04.403186657Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:00:04.408197 containerd[1763]: time="2026-04-13T20:00:04.407413821Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:00:04.410452 containerd[1763]: time="2026-04-13T20:00:04.410180145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=0" Apr 13 20:00:04.415015 containerd[1763]: time="2026-04-13T20:00:04.414982790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:00:04.415829 containerd[1763]: time="2026-04-13T20:00:04.415800911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 832.824036ms" Apr 13 20:00:04.417895 containerd[1763]: time="2026-04-13T20:00:04.417862153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 841.852448ms" Apr 13 20:00:04.421554 containerd[1763]: time="2026-04-13T20:00:04.421523038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 833.863036ms" Apr 13 20:00:04.497562 kubelet[2796]: E0413 20:00:04.497527 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.7-a-39cd336750?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s" Apr 13 20:00:04.606937 kubelet[2796]: E0413 
20:00:04.606808 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.7-a-39cd336750&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:00:04.707791 kubelet[2796]: E0413 20:00:04.707754 2796 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:00:04.757471 kubelet[2796]: I0413 20:00:04.757438 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:04.757738 kubelet[2796]: E0413 20:00:04.757715 2796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="ci-4081.3.7-a-39cd336750" Apr 13 20:00:05.145628 kubelet[2796]: E0413 20:00:05.145580 2796 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:00:05.160214 containerd[1763]: time="2026-04-13T20:00:05.159834764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:00:05.160214 containerd[1763]: time="2026-04-13T20:00:05.159887524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:00:05.160214 containerd[1763]: time="2026-04-13T20:00:05.159909404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:05.160214 containerd[1763]: time="2026-04-13T20:00:05.160009964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:05.164190 containerd[1763]: time="2026-04-13T20:00:05.163889728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:00:05.164190 containerd[1763]: time="2026-04-13T20:00:05.163958088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:00:05.164190 containerd[1763]: time="2026-04-13T20:00:05.163969648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:05.164190 containerd[1763]: time="2026-04-13T20:00:05.164102408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:05.167484 containerd[1763]: time="2026-04-13T20:00:05.167203852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:00:05.167484 containerd[1763]: time="2026-04-13T20:00:05.167253652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:00:05.167484 containerd[1763]: time="2026-04-13T20:00:05.167268772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:05.167484 containerd[1763]: time="2026-04-13T20:00:05.167332692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:05.186376 systemd[1]: Started cri-containerd-708a7cf1f316c669f3fcfbc2858854243a14be02b4b060313a4358aed3b01480.scope - libcontainer container 708a7cf1f316c669f3fcfbc2858854243a14be02b4b060313a4358aed3b01480. Apr 13 20:00:05.197477 systemd[1]: Started cri-containerd-29c108cd76784d322e7b0b16ed064379d092b6e5188cbeda0ff8742fd77dc29d.scope - libcontainer container 29c108cd76784d322e7b0b16ed064379d092b6e5188cbeda0ff8742fd77dc29d. Apr 13 20:00:05.204092 systemd[1]: Started cri-containerd-deb5490f79a747443995fa5c9bd5440996a2cab23836e25135006d179879d114.scope - libcontainer container deb5490f79a747443995fa5c9bd5440996a2cab23836e25135006d179879d114. Apr 13 20:00:05.241900 containerd[1763]: time="2026-04-13T20:00:05.241862658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.7-a-39cd336750,Uid:87317d55f1cc7e9447fb218717625dfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"708a7cf1f316c669f3fcfbc2858854243a14be02b4b060313a4358aed3b01480\"" Apr 13 20:00:05.255339 containerd[1763]: time="2026-04-13T20:00:05.255299913Z" level=info msg="CreateContainer within sandbox \"708a7cf1f316c669f3fcfbc2858854243a14be02b4b060313a4358aed3b01480\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:00:05.256185 containerd[1763]: time="2026-04-13T20:00:05.256113434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.7-a-39cd336750,Uid:b15fc16fda19a7117353c204b9948eb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"deb5490f79a747443995fa5c9bd5440996a2cab23836e25135006d179879d114\"" Apr 13 20:00:05.260964 containerd[1763]: time="2026-04-13T20:00:05.260936399Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.7-a-39cd336750,Uid:0634dc6e98ef202d06a304b9c828f17f,Namespace:kube-system,Attempt:0,} returns sandbox id \"29c108cd76784d322e7b0b16ed064379d092b6e5188cbeda0ff8742fd77dc29d\"" Apr 13 20:00:05.264431 containerd[1763]: time="2026-04-13T20:00:05.264288763Z" level=info msg="CreateContainer within sandbox \"deb5490f79a747443995fa5c9bd5440996a2cab23836e25135006d179879d114\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:00:05.270116 containerd[1763]: time="2026-04-13T20:00:05.270073650Z" level=info msg="CreateContainer within sandbox \"29c108cd76784d322e7b0b16ed064379d092b6e5188cbeda0ff8742fd77dc29d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:00:05.366725 containerd[1763]: time="2026-04-13T20:00:05.366583521Z" level=info msg="CreateContainer within sandbox \"708a7cf1f316c669f3fcfbc2858854243a14be02b4b060313a4358aed3b01480\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74bdd545d11e32949b9d592e7162aa983f69a66cca6dd3400ec4f8b6d49531da\"" Apr 13 20:00:05.367375 containerd[1763]: time="2026-04-13T20:00:05.367350561Z" level=info msg="StartContainer for \"74bdd545d11e32949b9d592e7162aa983f69a66cca6dd3400ec4f8b6d49531da\"" Apr 13 20:00:05.378825 containerd[1763]: time="2026-04-13T20:00:05.378658174Z" level=info msg="CreateContainer within sandbox \"deb5490f79a747443995fa5c9bd5440996a2cab23836e25135006d179879d114\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"daf9ae1ebeb4d1851af75ceaeaecd70a0fca856dc5201edf8a07c5965d6098ec\"" Apr 13 20:00:05.379747 containerd[1763]: time="2026-04-13T20:00:05.379578295Z" level=info msg="StartContainer for \"daf9ae1ebeb4d1851af75ceaeaecd70a0fca856dc5201edf8a07c5965d6098ec\"" Apr 13 20:00:05.392053 containerd[1763]: time="2026-04-13T20:00:05.392019270Z" level=info msg="CreateContainer within sandbox 
\"29c108cd76784d322e7b0b16ed064379d092b6e5188cbeda0ff8742fd77dc29d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c2baf02ade96452cf76477838a825f79c513a66523f2ac5b31824e2f6407b6f0\"" Apr 13 20:00:05.392742 containerd[1763]: time="2026-04-13T20:00:05.392707670Z" level=info msg="StartContainer for \"c2baf02ade96452cf76477838a825f79c513a66523f2ac5b31824e2f6407b6f0\"" Apr 13 20:00:05.394307 systemd[1]: Started cri-containerd-74bdd545d11e32949b9d592e7162aa983f69a66cca6dd3400ec4f8b6d49531da.scope - libcontainer container 74bdd545d11e32949b9d592e7162aa983f69a66cca6dd3400ec4f8b6d49531da. Apr 13 20:00:05.415378 systemd[1]: Started cri-containerd-daf9ae1ebeb4d1851af75ceaeaecd70a0fca856dc5201edf8a07c5965d6098ec.scope - libcontainer container daf9ae1ebeb4d1851af75ceaeaecd70a0fca856dc5201edf8a07c5965d6098ec. Apr 13 20:00:05.433278 systemd[1]: Started cri-containerd-c2baf02ade96452cf76477838a825f79c513a66523f2ac5b31824e2f6407b6f0.scope - libcontainer container c2baf02ade96452cf76477838a825f79c513a66523f2ac5b31824e2f6407b6f0. 
Apr 13 20:00:05.462185 containerd[1763]: time="2026-04-13T20:00:05.461651509Z" level=info msg="StartContainer for \"74bdd545d11e32949b9d592e7162aa983f69a66cca6dd3400ec4f8b6d49531da\" returns successfully"
Apr 13 20:00:05.471554 containerd[1763]: time="2026-04-13T20:00:05.471515641Z" level=info msg="StartContainer for \"daf9ae1ebeb4d1851af75ceaeaecd70a0fca856dc5201edf8a07c5965d6098ec\" returns successfully"
Apr 13 20:00:05.495752 containerd[1763]: time="2026-04-13T20:00:05.495655588Z" level=info msg="StartContainer for \"c2baf02ade96452cf76477838a825f79c513a66523f2ac5b31824e2f6407b6f0\" returns successfully"
Apr 13 20:00:06.162036 kubelet[2796]: E0413 20:00:06.161756 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:06.166492 kubelet[2796]: E0413 20:00:06.164269 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:06.168365 kubelet[2796]: E0413 20:00:06.168345 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:06.361188 kubelet[2796]: I0413 20:00:06.359650 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.170717 kubelet[2796]: E0413 20:00:07.170693 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.171282 kubelet[2796]: E0413 20:00:07.171195 2796 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.630385 kubelet[2796]: E0413 20:00:07.630342 2796 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.7-a-39cd336750\" not found" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.732666 kubelet[2796]: I0413 20:00:07.731855 2796 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.735861 kubelet[2796]: E0413 20:00:07.735781 2796 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.7-a-39cd336750.18a603005993ee89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.7-a-39cd336750,UID:ci-4081.3.7-a-39cd336750,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.7-a-39cd336750,},FirstTimestamp:2026-04-13 20:00:03.081989769 +0000 UTC m=+0.650913799,LastTimestamp:2026-04-13 20:00:03.081989769 +0000 UTC m=+0.650913799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.7-a-39cd336750,}"
Apr 13 20:00:07.793989 kubelet[2796]: I0413 20:00:07.793953 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.840388 kubelet[2796]: E0413 20:00:07.840161 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-39cd336750\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.840388 kubelet[2796]: I0413 20:00:07.840197 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.843591 kubelet[2796]: E0413 20:00:07.843570 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.843858 kubelet[2796]: I0413 20:00:07.843689 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:07.846139 kubelet[2796]: E0413 20:00:07.845187 2796 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.7-a-39cd336750\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:08.083481 kubelet[2796]: I0413 20:00:08.083306 2796 apiserver.go:52] "Watching apiserver"
Apr 13 20:00:08.096583 kubelet[2796]: I0413 20:00:08.096552 2796 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 20:00:10.484373 systemd[1]: Reloading requested from client PID 3082 ('systemctl') (unit session-9.scope)...
Apr 13 20:00:10.484390 systemd[1]: Reloading...
Apr 13 20:00:10.571180 zram_generator::config[3122]: No configuration found.
Apr 13 20:00:10.680609 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:00:10.738395 kubelet[2796]: I0413 20:00:10.738294 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:10.747176 kubelet[2796]: I0413 20:00:10.747150 2796 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 13 20:00:10.752751 kubelet[2796]: I0413 20:00:10.752716 2796 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:10.760710 kubelet[2796]: I0413 20:00:10.760688 2796 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 13 20:00:10.776771 systemd[1]: Reloading finished in 292 ms.
Apr 13 20:00:10.816664 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:00:10.831569 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 20:00:10.831754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:00:10.831797 systemd[1]: kubelet.service: Consumed 1.004s CPU time, 129.1M memory peak, 0B memory swap peak.
Apr 13 20:00:10.837348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:00:10.940848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:00:10.946094 (kubelet)[3186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:00:10.982159 kubelet[3186]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:00:10.982159 kubelet[3186]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:00:10.982159 kubelet[3186]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:00:10.982159 kubelet[3186]: I0413 20:00:10.981186 3186 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:00:10.986877 kubelet[3186]: I0413 20:00:10.986835 3186 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 13 20:00:10.987063 kubelet[3186]: I0413 20:00:10.987052 3186 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:00:10.990108 kubelet[3186]: I0413 20:00:10.990018 3186 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:00:10.992214 kubelet[3186]: I0413 20:00:10.992195 3186 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 13 20:00:10.995222 kubelet[3186]: I0413 20:00:10.995197 3186 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:00:10.998533 kubelet[3186]: E0413 20:00:10.998415 3186 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:00:10.998645 kubelet[3186]: I0413 20:00:10.998632 3186 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:00:11.001654 kubelet[3186]: I0413 20:00:11.001628 3186 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 13 20:00:11.001997 kubelet[3186]: I0413 20:00:11.001966 3186 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:00:11.002271 kubelet[3186]: I0413 20:00:11.002070 3186 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.7-a-39cd336750","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:00:11.002409 kubelet[3186]: I0413 20:00:11.002392 3186 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:00:11.002465 kubelet[3186]: I0413 20:00:11.002456 3186 container_manager_linux.go:303] "Creating device plugin manager"
Apr 13 20:00:11.002587 kubelet[3186]: I0413 20:00:11.002575 3186 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:00:11.002805 kubelet[3186]: I0413 20:00:11.002793 3186 kubelet.go:480] "Attempting to sync node with API server"
Apr 13 20:00:11.002881 kubelet[3186]: I0413 20:00:11.002871 3186 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:00:11.003023 kubelet[3186]: I0413 20:00:11.002953 3186 kubelet.go:386] "Adding apiserver pod source"
Apr 13 20:00:11.003023 kubelet[3186]: I0413 20:00:11.002977 3186 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:00:11.007051 kubelet[3186]: I0413 20:00:11.007019 3186 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:00:11.007745 kubelet[3186]: I0413 20:00:11.007718 3186 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:00:11.013795 kubelet[3186]: I0413 20:00:11.013715 3186 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 13 20:00:11.013795 kubelet[3186]: I0413 20:00:11.013750 3186 server.go:1289] "Started kubelet"
Apr 13 20:00:11.018676 kubelet[3186]: I0413 20:00:11.017642 3186 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:00:11.018676 kubelet[3186]: I0413 20:00:11.018021 3186 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:00:11.018676 kubelet[3186]: I0413 20:00:11.018057 3186 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 20:00:11.018676 kubelet[3186]: I0413 20:00:11.018064 3186 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:00:11.027902 kubelet[3186]: I0413 20:00:11.027875 3186 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:00:11.031761 kubelet[3186]: I0413 20:00:11.031727 3186 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:00:11.033053 kubelet[3186]: I0413 20:00:11.033026 3186 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 20:00:11.035922 kubelet[3186]: E0413 20:00:11.035793 3186 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.7-a-39cd336750\" not found"
Apr 13 20:00:11.037259 kubelet[3186]: I0413 20:00:11.037218 3186 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:00:11.046668 kubelet[3186]: I0413 20:00:11.046353 3186 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 20:00:11.046668 kubelet[3186]: I0413 20:00:11.046474 3186 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 20:00:11.052522 kubelet[3186]: I0413 20:00:11.052494 3186 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:00:11.053151 kubelet[3186]: I0413 20:00:11.053106 3186 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:00:11.065636 kubelet[3186]: I0413 20:00:11.065576 3186 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:00:11.066467 kubelet[3186]: I0413 20:00:11.066436 3186 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:00:11.067218 kubelet[3186]: I0413 20:00:11.066864 3186 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 20:00:11.067218 kubelet[3186]: I0413 20:00:11.066889 3186 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:00:11.067218 kubelet[3186]: I0413 20:00:11.066896 3186 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 20:00:11.067218 kubelet[3186]: E0413 20:00:11.066954 3186 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:00:11.071751 kubelet[3186]: E0413 20:00:11.071725 3186 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:00:11.119060 kubelet[3186]: I0413 20:00:11.119028 3186 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:00:11.119060 kubelet[3186]: I0413 20:00:11.119048 3186 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:00:11.119244 kubelet[3186]: I0413 20:00:11.119081 3186 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:00:11.119244 kubelet[3186]: I0413 20:00:11.119221 3186 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 13 20:00:11.119244 kubelet[3186]: I0413 20:00:11.119231 3186 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 13 20:00:11.119308 kubelet[3186]: I0413 20:00:11.119248 3186 policy_none.go:49] "None policy: Start"
Apr 13 20:00:11.119308 kubelet[3186]: I0413 20:00:11.119257 3186 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 20:00:11.119308 kubelet[3186]: I0413 20:00:11.119265 3186 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 20:00:11.119369 kubelet[3186]: I0413 20:00:11.119337 3186 state_mem.go:75] "Updated machine memory state"
Apr 13 20:00:11.127373 kubelet[3186]: E0413 20:00:11.126606 3186 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:00:11.127373 kubelet[3186]: I0413 20:00:11.126766 3186 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:00:11.127373 kubelet[3186]: I0413 20:00:11.126778 3186 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:00:11.127373 kubelet[3186]: I0413 20:00:11.127286 3186 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:00:11.129292 kubelet[3186]: E0413 20:00:11.129270 3186 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:00:11.168272 kubelet[3186]: I0413 20:00:11.168229 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.168527 kubelet[3186]: I0413 20:00:11.168507 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.168687 kubelet[3186]: I0413 20:00:11.168670 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.177191 kubelet[3186]: I0413 20:00:11.176964 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 13 20:00:11.182111 kubelet[3186]: I0413 20:00:11.182071 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 13 20:00:11.182192 kubelet[3186]: E0413 20:00:11.182113 3186 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-39cd336750\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.182192 kubelet[3186]: I0413 20:00:11.182174 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 13 20:00:11.182248 kubelet[3186]: E0413 20:00:11.182197 3186 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.231379 kubelet[3186]: I0413 20:00:11.231225 3186 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.247754 kubelet[3186]: I0413 20:00:11.247386 3186 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.247754 kubelet[3186]: I0413 20:00:11.247458 3186 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.348779 kubelet[3186]: I0413 20:00:11.348729 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b15fc16fda19a7117353c204b9948eb4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.7-a-39cd336750\" (UID: \"b15fc16fda19a7117353c204b9948eb4\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.348933 kubelet[3186]: I0413 20:00:11.348813 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b15fc16fda19a7117353c204b9948eb4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.7-a-39cd336750\" (UID: \"b15fc16fda19a7117353c204b9948eb4\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.348933 kubelet[3186]: I0413 20:00:11.348845 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.348933 kubelet[3186]: I0413 20:00:11.348883 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.348933 kubelet[3186]: I0413 20:00:11.348902 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.348933 kubelet[3186]: I0413 20:00:11.348918 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b15fc16fda19a7117353c204b9948eb4-ca-certs\") pod \"kube-apiserver-ci-4081.3.7-a-39cd336750\" (UID: \"b15fc16fda19a7117353c204b9948eb4\") " pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.349045 kubelet[3186]: I0413 20:00:11.348955 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-ca-certs\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.349045 kubelet[3186]: I0413 20:00:11.348972 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/87317d55f1cc7e9447fb218717625dfb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.7-a-39cd336750\" (UID: \"87317d55f1cc7e9447fb218717625dfb\") " pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:11.349045 kubelet[3186]: I0413 20:00:11.348986 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0634dc6e98ef202d06a304b9c828f17f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.7-a-39cd336750\" (UID: \"0634dc6e98ef202d06a304b9c828f17f\") " pod="kube-system/kube-scheduler-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:12.006665 kubelet[3186]: I0413 20:00:12.006627 3186 apiserver.go:52] "Watching apiserver"
Apr 13 20:00:12.047241 kubelet[3186]: I0413 20:00:12.047202 3186 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 20:00:12.097944 kubelet[3186]: I0413 20:00:12.097875 3186 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:12.106147 kubelet[3186]: I0413 20:00:12.105783 3186 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 13 20:00:12.106147 kubelet[3186]: E0413 20:00:12.105833 3186 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.7-a-39cd336750\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750"
Apr 13 20:00:12.133879 kubelet[3186]: I0413 20:00:12.133699 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.7-a-39cd336750" podStartSLOduration=2.133681157 podStartE2EDuration="2.133681157s" podCreationTimestamp="2026-04-13 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:00:12.122530062 +0000 UTC m=+1.172884709" watchObservedRunningTime="2026-04-13 20:00:12.133681157 +0000 UTC m=+1.184035804"
Apr 13 20:00:12.148535 kubelet[3186]: I0413 20:00:12.148407 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.7-a-39cd336750" podStartSLOduration=1.148382977 podStartE2EDuration="1.148382977s" podCreationTimestamp="2026-04-13 20:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:00:12.134157197 +0000 UTC m=+1.184511884" watchObservedRunningTime="2026-04-13 20:00:12.148382977 +0000 UTC m=+1.198737624"
Apr 13 20:00:12.163137 kubelet[3186]: I0413 20:00:12.161055 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.7-a-39cd336750" podStartSLOduration=2.161041994 podStartE2EDuration="2.161041994s" podCreationTimestamp="2026-04-13 20:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:00:12.148460817 +0000 UTC m=+1.198815464" watchObservedRunningTime="2026-04-13 20:00:12.161041994 +0000 UTC m=+1.211396641"
Apr 13 20:00:14.854576 kubelet[3186]: I0413 20:00:14.854223 3186 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 20:00:14.854930 containerd[1763]: time="2026-04-13T20:00:14.854488941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 20:00:14.855112 kubelet[3186]: I0413 20:00:14.854623 3186 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 20:00:15.777473 systemd[1]: Created slice kubepods-besteffort-pod86a2f8af_0c00_4d2a_89dc_e70fd7e6e3d8.slice - libcontainer container kubepods-besteffort-pod86a2f8af_0c00_4d2a_89dc_e70fd7e6e3d8.slice.
Apr 13 20:00:15.871753 kubelet[3186]: I0413 20:00:15.871604 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd76j\" (UniqueName: \"kubernetes.io/projected/86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8-kube-api-access-rd76j\") pod \"kube-proxy-f2f2x\" (UID: \"86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8\") " pod="kube-system/kube-proxy-f2f2x"
Apr 13 20:00:15.871753 kubelet[3186]: I0413 20:00:15.871649 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8-kube-proxy\") pod \"kube-proxy-f2f2x\" (UID: \"86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8\") " pod="kube-system/kube-proxy-f2f2x"
Apr 13 20:00:15.871753 kubelet[3186]: I0413 20:00:15.871667 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8-xtables-lock\") pod \"kube-proxy-f2f2x\" (UID: \"86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8\") " pod="kube-system/kube-proxy-f2f2x"
Apr 13 20:00:15.871753 kubelet[3186]: I0413 20:00:15.871683 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8-lib-modules\") pod \"kube-proxy-f2f2x\" (UID: \"86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8\") " pod="kube-system/kube-proxy-f2f2x"
Apr 13 20:00:16.091903 containerd[1763]: time="2026-04-13T20:00:16.090652547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2f2x,Uid:86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8,Namespace:kube-system,Attempt:0,}"
Apr 13 20:00:16.102284 systemd[1]: Created slice kubepods-besteffort-pod797b6739_291c_4874_a61c_77e52300eeab.slice - libcontainer container kubepods-besteffort-pod797b6739_291c_4874_a61c_77e52300eeab.slice.
Apr 13 20:00:16.141283 containerd[1763]: time="2026-04-13T20:00:16.140941762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:00:16.141283 containerd[1763]: time="2026-04-13T20:00:16.140989722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:00:16.141283 containerd[1763]: time="2026-04-13T20:00:16.141025442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:00:16.141283 containerd[1763]: time="2026-04-13T20:00:16.141104242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:00:16.164275 systemd[1]: Started cri-containerd-a624af57a0bfcb5f3753467918aff7a3545797151163785e3054e4face292f79.scope - libcontainer container a624af57a0bfcb5f3753467918aff7a3545797151163785e3054e4face292f79.
Apr 13 20:00:16.173975 kubelet[3186]: I0413 20:00:16.173914 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z6ft\" (UniqueName: \"kubernetes.io/projected/797b6739-291c-4874-a61c-77e52300eeab-kube-api-access-6z6ft\") pod \"tigera-operator-6bf85f8dd-85hhj\" (UID: \"797b6739-291c-4874-a61c-77e52300eeab\") " pod="tigera-operator/tigera-operator-6bf85f8dd-85hhj"
Apr 13 20:00:16.173975 kubelet[3186]: I0413 20:00:16.173970 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/797b6739-291c-4874-a61c-77e52300eeab-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-85hhj\" (UID: \"797b6739-291c-4874-a61c-77e52300eeab\") " pod="tigera-operator/tigera-operator-6bf85f8dd-85hhj"
Apr 13 20:00:16.183462 containerd[1763]: time="2026-04-13T20:00:16.183365569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2f2x,Uid:86a2f8af-0c00-4d2a-89dc-e70fd7e6e3d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a624af57a0bfcb5f3753467918aff7a3545797151163785e3054e4face292f79\""
Apr 13 20:00:16.192655 containerd[1763]: time="2026-04-13T20:00:16.192538939Z" level=info msg="CreateContainer within sandbox \"a624af57a0bfcb5f3753467918aff7a3545797151163785e3054e4face292f79\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 20:00:16.231300 containerd[1763]: time="2026-04-13T20:00:16.231258542Z" level=info msg="CreateContainer within sandbox \"a624af57a0bfcb5f3753467918aff7a3545797151163785e3054e4face292f79\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e7790c352dc191d1fffc6c048f8f730f0cffed539afdfbf30c7f6828b91dd72\""
Apr 13 20:00:16.232163 containerd[1763]: time="2026-04-13T20:00:16.231864303Z" level=info msg="StartContainer for \"7e7790c352dc191d1fffc6c048f8f730f0cffed539afdfbf30c7f6828b91dd72\""
Apr 13 20:00:16.256266 systemd[1]: Started cri-containerd-7e7790c352dc191d1fffc6c048f8f730f0cffed539afdfbf30c7f6828b91dd72.scope - libcontainer container 7e7790c352dc191d1fffc6c048f8f730f0cffed539afdfbf30c7f6828b91dd72.
Apr 13 20:00:16.287651 containerd[1763]: time="2026-04-13T20:00:16.287607204Z" level=info msg="StartContainer for \"7e7790c352dc191d1fffc6c048f8f730f0cffed539afdfbf30c7f6828b91dd72\" returns successfully"
Apr 13 20:00:16.407880 containerd[1763]: time="2026-04-13T20:00:16.407333176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-85hhj,Uid:797b6739-291c-4874-a61c-77e52300eeab,Namespace:tigera-operator,Attempt:0,}"
Apr 13 20:00:16.450084 containerd[1763]: time="2026-04-13T20:00:16.449956343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:00:16.450084 containerd[1763]: time="2026-04-13T20:00:16.450000384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:00:16.450084 containerd[1763]: time="2026-04-13T20:00:16.450044544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:00:16.450477 containerd[1763]: time="2026-04-13T20:00:16.450312784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:00:16.466295 systemd[1]: Started cri-containerd-04e7d93900dd04cf912ebc91d81bbc6427fee4a7a62df6cc0c5810e64a99d4a7.scope - libcontainer container 04e7d93900dd04cf912ebc91d81bbc6427fee4a7a62df6cc0c5810e64a99d4a7.
Apr 13 20:00:16.493909 containerd[1763]: time="2026-04-13T20:00:16.493868872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-85hhj,Uid:797b6739-291c-4874-a61c-77e52300eeab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"04e7d93900dd04cf912ebc91d81bbc6427fee4a7a62df6cc0c5810e64a99d4a7\""
Apr 13 20:00:16.495845 containerd[1763]: time="2026-04-13T20:00:16.495822874Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 13 20:00:17.124808 kubelet[3186]: I0413 20:00:17.124464 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f2f2x" podStartSLOduration=2.124447329 podStartE2EDuration="2.124447329s" podCreationTimestamp="2026-04-13 20:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:00:17.123404087 +0000 UTC m=+6.173758734" watchObservedRunningTime="2026-04-13 20:00:17.124447329 +0000 UTC m=+6.174801976"
Apr 13 20:00:17.967363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778209153.mount: Deactivated successfully.
Apr 13 20:00:19.467162 containerd[1763]: time="2026-04-13T20:00:19.466382197Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:19.470117 containerd[1763]: time="2026-04-13T20:00:19.469874681Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 13 20:00:19.474089 containerd[1763]: time="2026-04-13T20:00:19.473781007Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:19.480235 containerd[1763]: time="2026-04-13T20:00:19.480200655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:19.481105 containerd[1763]: time="2026-04-13T20:00:19.481075256Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.985218582s"
Apr 13 20:00:19.481105 containerd[1763]: time="2026-04-13T20:00:19.481104696Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 13 20:00:19.490787 containerd[1763]: time="2026-04-13T20:00:19.490759469Z" level=info msg="CreateContainer within sandbox \"04e7d93900dd04cf912ebc91d81bbc6427fee4a7a62df6cc0c5810e64a99d4a7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 13 20:00:19.536684 containerd[1763]: time="2026-04-13T20:00:19.536571130Z" level=info msg="CreateContainer within sandbox \"04e7d93900dd04cf912ebc91d81bbc6427fee4a7a62df6cc0c5810e64a99d4a7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5d94ec66bb3944dfe1911eed63e88ee89dfeb3147a736e50729bac293a3a4b26\""
Apr 13 20:00:19.537600 containerd[1763]: time="2026-04-13T20:00:19.537517771Z" level=info msg="StartContainer for \"5d94ec66bb3944dfe1911eed63e88ee89dfeb3147a736e50729bac293a3a4b26\""
Apr 13 20:00:19.570271 systemd[1]: Started cri-containerd-5d94ec66bb3944dfe1911eed63e88ee89dfeb3147a736e50729bac293a3a4b26.scope - libcontainer container 5d94ec66bb3944dfe1911eed63e88ee89dfeb3147a736e50729bac293a3a4b26.
Apr 13 20:00:19.596912 containerd[1763]: time="2026-04-13T20:00:19.596759210Z" level=info msg="StartContainer for \"5d94ec66bb3944dfe1911eed63e88ee89dfeb3147a736e50729bac293a3a4b26\" returns successfully"
Apr 13 20:00:24.292478 kubelet[3186]: I0413 20:00:24.292358 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-85hhj" podStartSLOduration=5.304778002 podStartE2EDuration="8.292326987s" podCreationTimestamp="2026-04-13 20:00:16 +0000 UTC" firstStartedPulling="2026-04-13 20:00:16.495327234 +0000 UTC m=+5.545681881" lastFinishedPulling="2026-04-13 20:00:19.482876259 +0000 UTC m=+8.533230866" observedRunningTime="2026-04-13 20:00:20.127437795 +0000 UTC m=+9.177792482" watchObservedRunningTime="2026-04-13 20:00:24.292326987 +0000 UTC m=+13.342681674"
Apr 13 20:00:25.338820 sudo[2251]: pam_unix(sudo:session): session closed for user root
Apr 13 20:00:25.489583 sshd[2248]: pam_unix(sshd:session): session closed for user core
Apr 13 20:00:25.492541 systemd[1]: sshd@6-10.0.0.17:22-20.229.252.112:35338.service: Deactivated successfully.
Apr 13 20:00:25.496749 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 20:00:25.497069 systemd[1]: session-9.scope: Consumed 8.792s CPU time, 151.5M memory peak, 0B memory swap peak.
Apr 13 20:00:25.499599 systemd-logind[1714]: Session 9 logged out. Waiting for processes to exit.
Apr 13 20:00:25.501707 systemd-logind[1714]: Removed session 9.
Apr 13 20:00:30.692625 systemd[1]: Created slice kubepods-besteffort-podbde00cd3_e95c_44aa_acea_33011e0ee6b1.slice - libcontainer container kubepods-besteffort-podbde00cd3_e95c_44aa_acea_33011e0ee6b1.slice.
Apr 13 20:00:30.759989 kubelet[3186]: I0413 20:00:30.759949 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bde00cd3-e95c-44aa-acea-33011e0ee6b1-tigera-ca-bundle\") pod \"calico-typha-d56695856-bbdj2\" (UID: \"bde00cd3-e95c-44aa-acea-33011e0ee6b1\") " pod="calico-system/calico-typha-d56695856-bbdj2"
Apr 13 20:00:30.760585 kubelet[3186]: I0413 20:00:30.760478 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bde00cd3-e95c-44aa-acea-33011e0ee6b1-typha-certs\") pod \"calico-typha-d56695856-bbdj2\" (UID: \"bde00cd3-e95c-44aa-acea-33011e0ee6b1\") " pod="calico-system/calico-typha-d56695856-bbdj2"
Apr 13 20:00:30.760585 kubelet[3186]: I0413 20:00:30.760541 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkx2\" (UniqueName: \"kubernetes.io/projected/bde00cd3-e95c-44aa-acea-33011e0ee6b1-kube-api-access-5tkx2\") pod \"calico-typha-d56695856-bbdj2\" (UID: \"bde00cd3-e95c-44aa-acea-33011e0ee6b1\") " pod="calico-system/calico-typha-d56695856-bbdj2"
Apr 13 20:00:30.846043 systemd[1]: Created slice kubepods-besteffort-poda596b730_c209_4e0f_a43b_dc2841ac8116.slice - libcontainer container kubepods-besteffort-poda596b730_c209_4e0f_a43b_dc2841ac8116.slice.
Apr 13 20:00:30.862660 kubelet[3186]: I0413 20:00:30.861329 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwz5p\" (UniqueName: \"kubernetes.io/projected/a596b730-c209-4e0f-a43b-dc2841ac8116-kube-api-access-fwz5p\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862660 kubelet[3186]: I0413 20:00:30.861392 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-bpffs\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862660 kubelet[3186]: I0413 20:00:30.861411 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-nodeproc\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862660 kubelet[3186]: I0413 20:00:30.861425 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-policysync\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862660 kubelet[3186]: I0413 20:00:30.861439 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-var-run-calico\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862888 kubelet[3186]: I0413 20:00:30.861456 3186 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-cni-log-dir\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862888 kubelet[3186]: I0413 20:00:30.861470 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-cni-net-dir\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862888 kubelet[3186]: I0413 20:00:30.861484 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-sys-fs\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862888 kubelet[3186]: I0413 20:00:30.861502 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a596b730-c209-4e0f-a43b-dc2841ac8116-tigera-ca-bundle\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.862888 kubelet[3186]: I0413 20:00:30.861518 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-flexvol-driver-host\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.863002 kubelet[3186]: I0413 20:00:30.861532 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-xtables-lock\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.863002 kubelet[3186]: I0413 20:00:30.861558 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-cni-bin-dir\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.863002 kubelet[3186]: I0413 20:00:30.861589 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-lib-modules\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.863002 kubelet[3186]: I0413 20:00:30.861602 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a596b730-c209-4e0f-a43b-dc2841ac8116-node-certs\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.863002 kubelet[3186]: I0413 20:00:30.861619 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a596b730-c209-4e0f-a43b-dc2841ac8116-var-lib-calico\") pod \"calico-node-wppks\" (UID: \"a596b730-c209-4e0f-a43b-dc2841ac8116\") " pod="calico-system/calico-node-wppks" Apr 13 20:00:30.945869 kubelet[3186]: E0413 20:00:30.945233 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:30.962352 kubelet[3186]: I0413 20:00:30.962317 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqpg\" (UniqueName: \"kubernetes.io/projected/18374f84-4b11-4d44-b742-59ff7eced87e-kube-api-access-lhqpg\") pod \"csi-node-driver-mq4g6\" (UID: \"18374f84-4b11-4d44-b742-59ff7eced87e\") " pod="calico-system/csi-node-driver-mq4g6" Apr 13 20:00:30.963821 kubelet[3186]: I0413 20:00:30.963788 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/18374f84-4b11-4d44-b742-59ff7eced87e-registration-dir\") pod \"csi-node-driver-mq4g6\" (UID: \"18374f84-4b11-4d44-b742-59ff7eced87e\") " pod="calico-system/csi-node-driver-mq4g6" Apr 13 20:00:30.964395 kubelet[3186]: I0413 20:00:30.964254 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/18374f84-4b11-4d44-b742-59ff7eced87e-kubelet-dir\") pod \"csi-node-driver-mq4g6\" (UID: \"18374f84-4b11-4d44-b742-59ff7eced87e\") " pod="calico-system/csi-node-driver-mq4g6" Apr 13 20:00:30.965263 kubelet[3186]: I0413 20:00:30.965240 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/18374f84-4b11-4d44-b742-59ff7eced87e-varrun\") pod \"csi-node-driver-mq4g6\" (UID: \"18374f84-4b11-4d44-b742-59ff7eced87e\") " pod="calico-system/csi-node-driver-mq4g6" Apr 13 20:00:30.966302 kubelet[3186]: I0413 20:00:30.966193 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/18374f84-4b11-4d44-b742-59ff7eced87e-socket-dir\") pod 
\"csi-node-driver-mq4g6\" (UID: \"18374f84-4b11-4d44-b742-59ff7eced87e\") " pod="calico-system/csi-node-driver-mq4g6" Apr 13 20:00:30.969379 kubelet[3186]: E0413 20:00:30.969349 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.969379 kubelet[3186]: W0413 20:00:30.969367 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.969379 kubelet[3186]: E0413 20:00:30.969386 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.971793 kubelet[3186]: E0413 20:00:30.971730 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.971793 kubelet[3186]: W0413 20:00:30.971747 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.971793 kubelet[3186]: E0413 20:00:30.971764 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.974960 kubelet[3186]: E0413 20:00:30.974826 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.974960 kubelet[3186]: W0413 20:00:30.974843 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.974960 kubelet[3186]: E0413 20:00:30.974863 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.975316 kubelet[3186]: E0413 20:00:30.975193 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.975316 kubelet[3186]: W0413 20:00:30.975213 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.975316 kubelet[3186]: E0413 20:00:30.975227 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.975526 kubelet[3186]: E0413 20:00:30.975514 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.975589 kubelet[3186]: W0413 20:00:30.975578 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.975724 kubelet[3186]: E0413 20:00:30.975636 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.977151 kubelet[3186]: E0413 20:00:30.976926 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.977151 kubelet[3186]: W0413 20:00:30.976940 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.977151 kubelet[3186]: E0413 20:00:30.976953 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.977985 kubelet[3186]: E0413 20:00:30.977967 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.977985 kubelet[3186]: W0413 20:00:30.977982 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.978121 kubelet[3186]: E0413 20:00:30.978030 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.978406 kubelet[3186]: E0413 20:00:30.978215 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.978406 kubelet[3186]: W0413 20:00:30.978227 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.978406 kubelet[3186]: E0413 20:00:30.978236 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.979658 kubelet[3186]: E0413 20:00:30.978680 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.979658 kubelet[3186]: W0413 20:00:30.978696 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.979658 kubelet[3186]: E0413 20:00:30.978709 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.979658 kubelet[3186]: E0413 20:00:30.979382 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.979658 kubelet[3186]: W0413 20:00:30.979392 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.979658 kubelet[3186]: E0413 20:00:30.979403 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.981197 kubelet[3186]: E0413 20:00:30.981179 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.981295 kubelet[3186]: W0413 20:00:30.981282 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.981358 kubelet[3186]: E0413 20:00:30.981345 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.982159 kubelet[3186]: E0413 20:00:30.981588 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.982273 kubelet[3186]: W0413 20:00:30.982255 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.982332 kubelet[3186]: E0413 20:00:30.982321 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.982628 kubelet[3186]: E0413 20:00:30.982614 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.984518 kubelet[3186]: W0413 20:00:30.984189 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.984518 kubelet[3186]: E0413 20:00:30.984216 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.984693 kubelet[3186]: E0413 20:00:30.984679 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.984752 kubelet[3186]: W0413 20:00:30.984741 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.988153 kubelet[3186]: E0413 20:00:30.985119 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.988472 kubelet[3186]: E0413 20:00:30.988457 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.988553 kubelet[3186]: W0413 20:00:30.988539 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.988622 kubelet[3186]: E0413 20:00:30.988610 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.988974 kubelet[3186]: E0413 20:00:30.988830 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.988974 kubelet[3186]: W0413 20:00:30.988842 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.988974 kubelet[3186]: E0413 20:00:30.988853 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.989339 kubelet[3186]: E0413 20:00:30.989230 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.989339 kubelet[3186]: W0413 20:00:30.989243 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.989339 kubelet[3186]: E0413 20:00:30.989254 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:30.990149 kubelet[3186]: E0413 20:00:30.989604 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.990149 kubelet[3186]: W0413 20:00:30.989617 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.990149 kubelet[3186]: E0413 20:00:30.989629 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:30.993847 kubelet[3186]: E0413 20:00:30.993216 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:30.993847 kubelet[3186]: W0413 20:00:30.993232 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:30.993847 kubelet[3186]: E0413 20:00:30.993246 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.004095 containerd[1763]: time="2026-04-13T20:00:31.002794643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d56695856-bbdj2,Uid:bde00cd3-e95c-44aa-acea-33011e0ee6b1,Namespace:calico-system,Attempt:0,}" Apr 13 20:00:31.016250 kubelet[3186]: E0413 20:00:31.016226 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.016381 kubelet[3186]: W0413 20:00:31.016368 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.016441 kubelet[3186]: E0413 20:00:31.016430 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.060642 containerd[1763]: time="2026-04-13T20:00:31.060549035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:00:31.061023 containerd[1763]: time="2026-04-13T20:00:31.060800275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:00:31.061023 containerd[1763]: time="2026-04-13T20:00:31.060888275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:31.062330 containerd[1763]: time="2026-04-13T20:00:31.062231037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:31.070209 kubelet[3186]: E0413 20:00:31.070063 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.070209 kubelet[3186]: W0413 20:00:31.070083 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.070209 kubelet[3186]: E0413 20:00:31.070101 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.070626 kubelet[3186]: E0413 20:00:31.070424 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.070626 kubelet[3186]: W0413 20:00:31.070437 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.070626 kubelet[3186]: E0413 20:00:31.070448 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.070906 kubelet[3186]: E0413 20:00:31.070773 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.070906 kubelet[3186]: W0413 20:00:31.070785 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.070906 kubelet[3186]: E0413 20:00:31.070796 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.071065 kubelet[3186]: E0413 20:00:31.071053 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.071200 kubelet[3186]: W0413 20:00:31.071169 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.071200 kubelet[3186]: E0413 20:00:31.071188 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.071499 kubelet[3186]: E0413 20:00:31.071487 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.071575 kubelet[3186]: W0413 20:00:31.071563 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.071631 kubelet[3186]: E0413 20:00:31.071621 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.073166 kubelet[3186]: E0413 20:00:31.072960 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.073166 kubelet[3186]: W0413 20:00:31.072974 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.073166 kubelet[3186]: E0413 20:00:31.072985 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073340 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.074055 kubelet[3186]: W0413 20:00:31.073352 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073363 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073560 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.074055 kubelet[3186]: W0413 20:00:31.073571 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073581 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073745 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.074055 kubelet[3186]: W0413 20:00:31.073753 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073762 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.074055 kubelet[3186]: E0413 20:00:31.073912 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.074346 kubelet[3186]: W0413 20:00:31.073920 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.074346 kubelet[3186]: E0413 20:00:31.073928 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.074736 kubelet[3186]: E0413 20:00:31.074438 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.074736 kubelet[3186]: W0413 20:00:31.074449 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.074736 kubelet[3186]: E0413 20:00:31.074463 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.074736 kubelet[3186]: E0413 20:00:31.074636 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.074736 kubelet[3186]: W0413 20:00:31.074644 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.074736 kubelet[3186]: E0413 20:00:31.074653 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.076350 kubelet[3186]: E0413 20:00:31.074996 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.076350 kubelet[3186]: W0413 20:00:31.075009 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.076350 kubelet[3186]: E0413 20:00:31.075020 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.076679 kubelet[3186]: E0413 20:00:31.076665 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.076763 kubelet[3186]: W0413 20:00:31.076749 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.076825 kubelet[3186]: E0413 20:00:31.076814 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.077870 kubelet[3186]: E0413 20:00:31.077771 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.077870 kubelet[3186]: W0413 20:00:31.077786 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.077870 kubelet[3186]: E0413 20:00:31.077798 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.078402 kubelet[3186]: E0413 20:00:31.078380 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.078402 kubelet[3186]: W0413 20:00:31.078396 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.078494 kubelet[3186]: E0413 20:00:31.078409 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.079870 kubelet[3186]: E0413 20:00:31.079850 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.079870 kubelet[3186]: W0413 20:00:31.079867 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.079952 kubelet[3186]: E0413 20:00:31.079881 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.080361 kubelet[3186]: E0413 20:00:31.080342 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.080361 kubelet[3186]: W0413 20:00:31.080357 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.080438 kubelet[3186]: E0413 20:00:31.080369 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.080883 kubelet[3186]: E0413 20:00:31.080866 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.080883 kubelet[3186]: W0413 20:00:31.080879 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.080952 kubelet[3186]: E0413 20:00:31.080889 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.081310 kubelet[3186]: E0413 20:00:31.081172 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.081310 kubelet[3186]: W0413 20:00:31.081186 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.081310 kubelet[3186]: E0413 20:00:31.081198 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.081851 kubelet[3186]: E0413 20:00:31.081571 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.081851 kubelet[3186]: W0413 20:00:31.081583 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.081851 kubelet[3186]: E0413 20:00:31.081595 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.081851 kubelet[3186]: E0413 20:00:31.081775 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.081851 kubelet[3186]: W0413 20:00:31.081784 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.081851 kubelet[3186]: E0413 20:00:31.081792 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.082149 kubelet[3186]: E0413 20:00:31.082120 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.082681 kubelet[3186]: W0413 20:00:31.082368 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.082681 kubelet[3186]: E0413 20:00:31.082388 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.082681 kubelet[3186]: E0413 20:00:31.082586 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.082681 kubelet[3186]: W0413 20:00:31.082595 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.082681 kubelet[3186]: E0413 20:00:31.082605 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.082952 kubelet[3186]: E0413 20:00:31.082940 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.083009 kubelet[3186]: W0413 20:00:31.082998 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.083298 systemd[1]: Started cri-containerd-6e51f0c9a0da655f82e06c2c3afdb1b102658695750edcc0bff8a87aabbc9f8e.scope - libcontainer container 6e51f0c9a0da655f82e06c2c3afdb1b102658695750edcc0bff8a87aabbc9f8e. Apr 13 20:00:31.083474 kubelet[3186]: E0413 20:00:31.083426 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:31.096716 kubelet[3186]: E0413 20:00:31.096693 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:31.096882 kubelet[3186]: W0413 20:00:31.096868 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:31.096964 kubelet[3186]: E0413 20:00:31.096952 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:31.121020 containerd[1763]: time="2026-04-13T20:00:31.120974270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d56695856-bbdj2,Uid:bde00cd3-e95c-44aa-acea-33011e0ee6b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e51f0c9a0da655f82e06c2c3afdb1b102658695750edcc0bff8a87aabbc9f8e\"" Apr 13 20:00:31.125567 containerd[1763]: time="2026-04-13T20:00:31.125379076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:00:31.151415 containerd[1763]: time="2026-04-13T20:00:31.151376988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wppks,Uid:a596b730-c209-4e0f-a43b-dc2841ac8116,Namespace:calico-system,Attempt:0,}" Apr 13 20:00:31.202924 containerd[1763]: time="2026-04-13T20:00:31.202746532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:00:31.202924 containerd[1763]: time="2026-04-13T20:00:31.202816812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:00:31.202924 containerd[1763]: time="2026-04-13T20:00:31.202832292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:31.202924 containerd[1763]: time="2026-04-13T20:00:31.202920212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:00:31.220267 systemd[1]: Started cri-containerd-b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96.scope - libcontainer container b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96. Apr 13 20:00:31.239162 containerd[1763]: time="2026-04-13T20:00:31.239086297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wppks,Uid:a596b730-c209-4e0f-a43b-dc2841ac8116,Namespace:calico-system,Attempt:0,} returns sandbox id \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\"" Apr 13 20:00:33.068412 kubelet[3186]: E0413 20:00:33.067562 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:35.069006 kubelet[3186]: E0413 20:00:35.068163 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:37.069268 kubelet[3186]: E0413 20:00:37.068378 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:37.194959 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount128714905.mount: Deactivated successfully. Apr 13 20:00:37.594162 containerd[1763]: time="2026-04-13T20:00:37.593955975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:00:37.596638 containerd[1763]: time="2026-04-13T20:00:37.596590738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 13 20:00:37.600426 containerd[1763]: time="2026-04-13T20:00:37.600375063Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:00:37.605874 containerd[1763]: time="2026-04-13T20:00:37.605086109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:00:37.605874 containerd[1763]: time="2026-04-13T20:00:37.605643510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 6.480231394s" Apr 13 20:00:37.605874 containerd[1763]: time="2026-04-13T20:00:37.605676950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 13 20:00:37.608203 containerd[1763]: time="2026-04-13T20:00:37.608176993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:00:37.628594 containerd[1763]: time="2026-04-13T20:00:37.628557180Z" level=info 
msg="CreateContainer within sandbox \"6e51f0c9a0da655f82e06c2c3afdb1b102658695750edcc0bff8a87aabbc9f8e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:00:37.664019 containerd[1763]: time="2026-04-13T20:00:37.663960466Z" level=info msg="CreateContainer within sandbox \"6e51f0c9a0da655f82e06c2c3afdb1b102658695750edcc0bff8a87aabbc9f8e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e6575b7cd0b3fa6409887a02fff260b5f1319caef66b92f16990862c04e7a3bf\"" Apr 13 20:00:37.665960 containerd[1763]: time="2026-04-13T20:00:37.664843427Z" level=info msg="StartContainer for \"e6575b7cd0b3fa6409887a02fff260b5f1319caef66b92f16990862c04e7a3bf\"" Apr 13 20:00:37.688252 systemd[1]: Started cri-containerd-e6575b7cd0b3fa6409887a02fff260b5f1319caef66b92f16990862c04e7a3bf.scope - libcontainer container e6575b7cd0b3fa6409887a02fff260b5f1319caef66b92f16990862c04e7a3bf. Apr 13 20:00:37.741310 containerd[1763]: time="2026-04-13T20:00:37.741259566Z" level=info msg="StartContainer for \"e6575b7cd0b3fa6409887a02fff260b5f1319caef66b92f16990862c04e7a3bf\" returns successfully" Apr 13 20:00:38.196853 kubelet[3186]: E0413 20:00:38.196824 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.197498 kubelet[3186]: W0413 20:00:38.197169 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.197498 kubelet[3186]: E0413 20:00:38.197198 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.197498 kubelet[3186]: E0413 20:00:38.197424 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.197734 kubelet[3186]: W0413 20:00:38.197435 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.197734 kubelet[3186]: E0413 20:00:38.197636 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.197887 kubelet[3186]: E0413 20:00:38.197876 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.198004 kubelet[3186]: W0413 20:00:38.197915 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.198004 kubelet[3186]: E0413 20:00:38.197928 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.198396 kubelet[3186]: E0413 20:00:38.198279 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.198396 kubelet[3186]: W0413 20:00:38.198331 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.198396 kubelet[3186]: E0413 20:00:38.198345 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.198800 kubelet[3186]: E0413 20:00:38.198692 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.198800 kubelet[3186]: W0413 20:00:38.198705 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.198800 kubelet[3186]: E0413 20:00:38.198715 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.199026 kubelet[3186]: E0413 20:00:38.198964 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.199026 kubelet[3186]: W0413 20:00:38.198975 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.199026 kubelet[3186]: E0413 20:00:38.198985 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.199354 kubelet[3186]: E0413 20:00:38.199288 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.199354 kubelet[3186]: W0413 20:00:38.199299 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.199354 kubelet[3186]: E0413 20:00:38.199309 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.199728 kubelet[3186]: E0413 20:00:38.199613 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.199728 kubelet[3186]: W0413 20:00:38.199629 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.199728 kubelet[3186]: E0413 20:00:38.199639 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.199955 kubelet[3186]: E0413 20:00:38.199900 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.199955 kubelet[3186]: W0413 20:00:38.199909 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.199955 kubelet[3186]: E0413 20:00:38.199919 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.200324 kubelet[3186]: E0413 20:00:38.200259 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.200324 kubelet[3186]: W0413 20:00:38.200275 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.200324 kubelet[3186]: E0413 20:00:38.200287 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.200705 kubelet[3186]: E0413 20:00:38.200589 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.200705 kubelet[3186]: W0413 20:00:38.200604 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.200705 kubelet[3186]: E0413 20:00:38.200618 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.200953 kubelet[3186]: E0413 20:00:38.200873 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.200953 kubelet[3186]: W0413 20:00:38.200884 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.200953 kubelet[3186]: E0413 20:00:38.200893 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.201317 kubelet[3186]: E0413 20:00:38.201208 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.201317 kubelet[3186]: W0413 20:00:38.201221 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.201317 kubelet[3186]: E0413 20:00:38.201231 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.201529 kubelet[3186]: E0413 20:00:38.201496 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.201529 kubelet[3186]: W0413 20:00:38.201507 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.201641 kubelet[3186]: E0413 20:00:38.201517 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.201843 kubelet[3186]: E0413 20:00:38.201826 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.201989 kubelet[3186]: W0413 20:00:38.201909 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.201989 kubelet[3186]: E0413 20:00:38.201934 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.214344 kubelet[3186]: E0413 20:00:38.214260 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.214344 kubelet[3186]: W0413 20:00:38.214274 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.214344 kubelet[3186]: E0413 20:00:38.214285 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.214709 kubelet[3186]: E0413 20:00:38.214633 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.214709 kubelet[3186]: W0413 20:00:38.214645 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.214709 kubelet[3186]: E0413 20:00:38.214656 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.215164 kubelet[3186]: E0413 20:00:38.214873 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.215164 kubelet[3186]: W0413 20:00:38.214891 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.215164 kubelet[3186]: E0413 20:00:38.214904 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.215164 kubelet[3186]: E0413 20:00:38.215076 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.215164 kubelet[3186]: W0413 20:00:38.215085 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.215164 kubelet[3186]: E0413 20:00:38.215093 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.216007 kubelet[3186]: E0413 20:00:38.215239 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.216007 kubelet[3186]: W0413 20:00:38.215246 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.216007 kubelet[3186]: E0413 20:00:38.215256 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.216007 kubelet[3186]: E0413 20:00:38.215422 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.216007 kubelet[3186]: W0413 20:00:38.215430 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.216007 kubelet[3186]: E0413 20:00:38.215438 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.216654 kubelet[3186]: E0413 20:00:38.216630 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.216654 kubelet[3186]: W0413 20:00:38.216646 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.216654 kubelet[3186]: E0413 20:00:38.216657 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.217812 kubelet[3186]: E0413 20:00:38.217698 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.217812 kubelet[3186]: W0413 20:00:38.217712 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.217812 kubelet[3186]: E0413 20:00:38.217723 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.218711 kubelet[3186]: E0413 20:00:38.218475 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.218711 kubelet[3186]: W0413 20:00:38.218487 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.218711 kubelet[3186]: E0413 20:00:38.218498 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.219479 kubelet[3186]: E0413 20:00:38.219336 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.219479 kubelet[3186]: W0413 20:00:38.219349 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.219479 kubelet[3186]: E0413 20:00:38.219361 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.219853 kubelet[3186]: E0413 20:00:38.219764 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.219853 kubelet[3186]: W0413 20:00:38.219779 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.219853 kubelet[3186]: E0413 20:00:38.219790 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.220368 kubelet[3186]: E0413 20:00:38.220259 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.220368 kubelet[3186]: W0413 20:00:38.220272 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.220368 kubelet[3186]: E0413 20:00:38.220286 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.220723 kubelet[3186]: E0413 20:00:38.220606 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.220723 kubelet[3186]: W0413 20:00:38.220618 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.220723 kubelet[3186]: E0413 20:00:38.220628 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.221214 kubelet[3186]: E0413 20:00:38.221049 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.221214 kubelet[3186]: W0413 20:00:38.221077 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.221214 kubelet[3186]: E0413 20:00:38.221089 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.222280 kubelet[3186]: E0413 20:00:38.222065 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.222280 kubelet[3186]: W0413 20:00:38.222078 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.222280 kubelet[3186]: E0413 20:00:38.222090 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.222845 kubelet[3186]: E0413 20:00:38.222671 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.222845 kubelet[3186]: W0413 20:00:38.222688 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.222845 kubelet[3186]: E0413 20:00:38.222699 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:38.223050 kubelet[3186]: E0413 20:00:38.222911 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.223050 kubelet[3186]: W0413 20:00:38.222921 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.223050 kubelet[3186]: E0413 20:00:38.222932 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:38.223173 kubelet[3186]: E0413 20:00:38.223076 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:38.223173 kubelet[3186]: W0413 20:00:38.223083 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:38.223173 kubelet[3186]: E0413 20:00:38.223091 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.069113 kubelet[3186]: E0413 20:00:39.067293 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:39.145769 kubelet[3186]: I0413 20:00:39.145740 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:00:39.209584 kubelet[3186]: E0413 20:00:39.209553 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.209584 kubelet[3186]: W0413 20:00:39.209578 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.209937 kubelet[3186]: E0413 20:00:39.209599 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.209937 kubelet[3186]: E0413 20:00:39.209754 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.209937 kubelet[3186]: W0413 20:00:39.209761 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.209937 kubelet[3186]: E0413 20:00:39.209770 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.209937 kubelet[3186]: E0413 20:00:39.209902 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.209937 kubelet[3186]: W0413 20:00:39.209910 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.209937 kubelet[3186]: E0413 20:00:39.209918 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.210084 kubelet[3186]: E0413 20:00:39.210046 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.210084 kubelet[3186]: W0413 20:00:39.210053 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.210084 kubelet[3186]: E0413 20:00:39.210059 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.210248 kubelet[3186]: E0413 20:00:39.210232 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.210248 kubelet[3186]: W0413 20:00:39.210244 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.210298 kubelet[3186]: E0413 20:00:39.210253 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.210393 kubelet[3186]: E0413 20:00:39.210380 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.210393 kubelet[3186]: W0413 20:00:39.210392 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.210436 kubelet[3186]: E0413 20:00:39.210399 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.210535 kubelet[3186]: E0413 20:00:39.210523 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.210561 kubelet[3186]: W0413 20:00:39.210534 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.210561 kubelet[3186]: E0413 20:00:39.210542 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.210680 kubelet[3186]: E0413 20:00:39.210668 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.210705 kubelet[3186]: W0413 20:00:39.210679 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.210705 kubelet[3186]: E0413 20:00:39.210687 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.210887 kubelet[3186]: E0413 20:00:39.210872 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.210887 kubelet[3186]: W0413 20:00:39.210883 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.210950 kubelet[3186]: E0413 20:00:39.210891 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.211048 kubelet[3186]: E0413 20:00:39.211035 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.211048 kubelet[3186]: W0413 20:00:39.211046 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.211094 kubelet[3186]: E0413 20:00:39.211054 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.211214 kubelet[3186]: E0413 20:00:39.211202 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.211214 kubelet[3186]: W0413 20:00:39.211213 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.211268 kubelet[3186]: E0413 20:00:39.211220 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.211370 kubelet[3186]: E0413 20:00:39.211357 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.211370 kubelet[3186]: W0413 20:00:39.211367 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.211415 kubelet[3186]: E0413 20:00:39.211375 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.211522 kubelet[3186]: E0413 20:00:39.211510 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.211547 kubelet[3186]: W0413 20:00:39.211522 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.211574 kubelet[3186]: E0413 20:00:39.211530 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.211701 kubelet[3186]: E0413 20:00:39.211689 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.211739 kubelet[3186]: W0413 20:00:39.211702 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.211739 kubelet[3186]: E0413 20:00:39.211711 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.211892 kubelet[3186]: E0413 20:00:39.211880 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.211892 kubelet[3186]: W0413 20:00:39.211889 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.211956 kubelet[3186]: E0413 20:00:39.211897 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.222353 kubelet[3186]: E0413 20:00:39.222333 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.222353 kubelet[3186]: W0413 20:00:39.222349 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.222477 kubelet[3186]: E0413 20:00:39.222360 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.222557 kubelet[3186]: E0413 20:00:39.222545 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.222557 kubelet[3186]: W0413 20:00:39.222556 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.222638 kubelet[3186]: E0413 20:00:39.222565 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.222754 kubelet[3186]: E0413 20:00:39.222743 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.222754 kubelet[3186]: W0413 20:00:39.222753 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.222821 kubelet[3186]: E0413 20:00:39.222761 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.222951 kubelet[3186]: E0413 20:00:39.222940 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.222951 kubelet[3186]: W0413 20:00:39.222949 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.223026 kubelet[3186]: E0413 20:00:39.222958 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.223098 kubelet[3186]: E0413 20:00:39.223088 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.223098 kubelet[3186]: W0413 20:00:39.223097 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.223188 kubelet[3186]: E0413 20:00:39.223105 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.223251 kubelet[3186]: E0413 20:00:39.223240 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.223251 kubelet[3186]: W0413 20:00:39.223249 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.223327 kubelet[3186]: E0413 20:00:39.223257 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.223410 kubelet[3186]: E0413 20:00:39.223400 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.223410 kubelet[3186]: W0413 20:00:39.223408 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.223468 kubelet[3186]: E0413 20:00:39.223416 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.223779 kubelet[3186]: E0413 20:00:39.223678 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.223779 kubelet[3186]: W0413 20:00:39.223691 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.223779 kubelet[3186]: E0413 20:00:39.223703 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.224022 kubelet[3186]: E0413 20:00:39.223945 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.224022 kubelet[3186]: W0413 20:00:39.223957 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.224022 kubelet[3186]: E0413 20:00:39.223967 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.224324 kubelet[3186]: E0413 20:00:39.224258 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.224324 kubelet[3186]: W0413 20:00:39.224269 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.224324 kubelet[3186]: E0413 20:00:39.224280 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.224646 kubelet[3186]: E0413 20:00:39.224567 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.224646 kubelet[3186]: W0413 20:00:39.224579 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.224646 kubelet[3186]: E0413 20:00:39.224589 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.224953 kubelet[3186]: E0413 20:00:39.224869 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.224953 kubelet[3186]: W0413 20:00:39.224880 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.224953 kubelet[3186]: E0413 20:00:39.224890 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.225317 kubelet[3186]: E0413 20:00:39.225187 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.225317 kubelet[3186]: W0413 20:00:39.225199 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.225317 kubelet[3186]: E0413 20:00:39.225209 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.225417 kubelet[3186]: E0413 20:00:39.225407 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.225444 kubelet[3186]: W0413 20:00:39.225417 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.225444 kubelet[3186]: E0413 20:00:39.225427 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.225599 kubelet[3186]: E0413 20:00:39.225543 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.225599 kubelet[3186]: W0413 20:00:39.225555 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.225599 kubelet[3186]: E0413 20:00:39.225563 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.225725 kubelet[3186]: E0413 20:00:39.225710 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.225725 kubelet[3186]: W0413 20:00:39.225723 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.225785 kubelet[3186]: E0413 20:00:39.225733 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:39.226140 kubelet[3186]: E0413 20:00:39.226075 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.226140 kubelet[3186]: W0413 20:00:39.226090 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.226140 kubelet[3186]: E0413 20:00:39.226103 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:39.226481 kubelet[3186]: E0413 20:00:39.226436 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:39.226481 kubelet[3186]: W0413 20:00:39.226449 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:39.226481 kubelet[3186]: E0413 20:00:39.226461 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:41.072155 kubelet[3186]: E0413 20:00:41.071214 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:43.069662 kubelet[3186]: E0413 20:00:43.068265 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:45.068483 kubelet[3186]: E0413 20:00:45.068383 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e" Apr 13 20:00:45.608569 kubelet[3186]: I0413 20:00:45.608485 3186 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Apr 13 20:00:45.622702 kubelet[3186]: I0413 20:00:45.622634 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d56695856-bbdj2" podStartSLOduration=9.137433476 podStartE2EDuration="15.622619236s" podCreationTimestamp="2026-04-13 20:00:30 +0000 UTC" firstStartedPulling="2026-04-13 20:00:31.122884313 +0000 UTC m=+20.173238920" lastFinishedPulling="2026-04-13 20:00:37.608070033 +0000 UTC m=+26.658424680" observedRunningTime="2026-04-13 20:00:38.158253029 +0000 UTC m=+27.208607636" watchObservedRunningTime="2026-04-13 20:00:45.622619236 +0000 UTC m=+34.672973883" Apr 13 20:00:45.646836 kubelet[3186]: E0413 20:00:45.646803 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.646836 kubelet[3186]: W0413 20:00:45.646825 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.647105 kubelet[3186]: E0413 20:00:45.646844 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.647352 kubelet[3186]: E0413 20:00:45.647313 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.647399 kubelet[3186]: W0413 20:00:45.647330 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.647399 kubelet[3186]: E0413 20:00:45.647377 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.647819 kubelet[3186]: E0413 20:00:45.647799 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.647819 kubelet[3186]: W0413 20:00:45.647819 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.647901 kubelet[3186]: E0413 20:00:45.647832 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.648517 kubelet[3186]: E0413 20:00:45.648493 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.648517 kubelet[3186]: W0413 20:00:45.648511 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.648517 kubelet[3186]: E0413 20:00:45.648524 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.648753 kubelet[3186]: E0413 20:00:45.648738 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.648753 kubelet[3186]: W0413 20:00:45.648750 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.648817 kubelet[3186]: E0413 20:00:45.648761 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.649050 kubelet[3186]: E0413 20:00:45.649031 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.649050 kubelet[3186]: W0413 20:00:45.649046 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.649165 kubelet[3186]: E0413 20:00:45.649057 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.649361 kubelet[3186]: E0413 20:00:45.649334 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.649361 kubelet[3186]: W0413 20:00:45.649349 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.649439 kubelet[3186]: E0413 20:00:45.649359 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.649597 kubelet[3186]: E0413 20:00:45.649580 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.649597 kubelet[3186]: W0413 20:00:45.649593 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.649660 kubelet[3186]: E0413 20:00:45.649602 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.649835 kubelet[3186]: E0413 20:00:45.649818 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.649835 kubelet[3186]: W0413 20:00:45.649832 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.649896 kubelet[3186]: E0413 20:00:45.649842 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.650054 kubelet[3186]: E0413 20:00:45.650039 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.650054 kubelet[3186]: W0413 20:00:45.650050 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.650157 kubelet[3186]: E0413 20:00:45.650058 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.650331 kubelet[3186]: E0413 20:00:45.650295 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.650331 kubelet[3186]: W0413 20:00:45.650326 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.650331 kubelet[3186]: E0413 20:00:45.650337 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.650556 kubelet[3186]: E0413 20:00:45.650538 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.650556 kubelet[3186]: W0413 20:00:45.650552 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.650616 kubelet[3186]: E0413 20:00:45.650561 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.650769 kubelet[3186]: E0413 20:00:45.650754 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.650769 kubelet[3186]: W0413 20:00:45.650765 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.650830 kubelet[3186]: E0413 20:00:45.650773 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.650953 kubelet[3186]: E0413 20:00:45.650939 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.650953 kubelet[3186]: W0413 20:00:45.650950 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.651017 kubelet[3186]: E0413 20:00:45.650957 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.651192 kubelet[3186]: E0413 20:00:45.651176 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.651192 kubelet[3186]: W0413 20:00:45.651191 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.651255 kubelet[3186]: E0413 20:00:45.651201 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.656500 kubelet[3186]: E0413 20:00:45.656481 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.656500 kubelet[3186]: W0413 20:00:45.656496 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.656678 kubelet[3186]: E0413 20:00:45.656506 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.656753 kubelet[3186]: E0413 20:00:45.656740 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.656753 kubelet[3186]: W0413 20:00:45.656750 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.656812 kubelet[3186]: E0413 20:00:45.656759 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.656945 kubelet[3186]: E0413 20:00:45.656933 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.656945 kubelet[3186]: W0413 20:00:45.656943 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.657152 kubelet[3186]: E0413 20:00:45.656951 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.657234 kubelet[3186]: E0413 20:00:45.657221 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.657234 kubelet[3186]: W0413 20:00:45.657232 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.657301 kubelet[3186]: E0413 20:00:45.657240 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.657438 kubelet[3186]: E0413 20:00:45.657426 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.657438 kubelet[3186]: W0413 20:00:45.657436 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.657563 kubelet[3186]: E0413 20:00:45.657443 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.657634 kubelet[3186]: E0413 20:00:45.657623 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.657634 kubelet[3186]: W0413 20:00:45.657632 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.657692 kubelet[3186]: E0413 20:00:45.657639 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.657842 kubelet[3186]: E0413 20:00:45.657829 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.657842 kubelet[3186]: W0413 20:00:45.657839 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.658072 kubelet[3186]: E0413 20:00:45.657847 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.658230 kubelet[3186]: E0413 20:00:45.658218 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.658230 kubelet[3186]: W0413 20:00:45.658228 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.658369 kubelet[3186]: E0413 20:00:45.658236 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.658455 kubelet[3186]: E0413 20:00:45.658440 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.658455 kubelet[3186]: W0413 20:00:45.658453 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.658512 kubelet[3186]: E0413 20:00:45.658474 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.658657 kubelet[3186]: E0413 20:00:45.658645 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.658657 kubelet[3186]: W0413 20:00:45.658656 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.658727 kubelet[3186]: E0413 20:00:45.658664 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.658881 kubelet[3186]: E0413 20:00:45.658869 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.658881 kubelet[3186]: W0413 20:00:45.658881 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.658959 kubelet[3186]: E0413 20:00:45.658891 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.659098 kubelet[3186]: E0413 20:00:45.659087 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.659098 kubelet[3186]: W0413 20:00:45.659096 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.659178 kubelet[3186]: E0413 20:00:45.659105 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.659319 kubelet[3186]: E0413 20:00:45.659307 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.659319 kubelet[3186]: W0413 20:00:45.659317 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.659381 kubelet[3186]: E0413 20:00:45.659325 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.659629 kubelet[3186]: E0413 20:00:45.659615 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.659629 kubelet[3186]: W0413 20:00:45.659627 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.659690 kubelet[3186]: E0413 20:00:45.659636 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.659801 kubelet[3186]: E0413 20:00:45.659788 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.659801 kubelet[3186]: W0413 20:00:45.659798 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.659861 kubelet[3186]: E0413 20:00:45.659806 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.659974 kubelet[3186]: E0413 20:00:45.659961 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.659974 kubelet[3186]: W0413 20:00:45.659971 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.660032 kubelet[3186]: E0413 20:00:45.659978 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:45.660170 kubelet[3186]: E0413 20:00:45.660156 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.660170 kubelet[3186]: W0413 20:00:45.660167 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.660242 kubelet[3186]: E0413 20:00:45.660176 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:45.660581 kubelet[3186]: E0413 20:00:45.660566 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:45.660581 kubelet[3186]: W0413 20:00:45.660578 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:45.660640 kubelet[3186]: E0413 20:00:45.660588 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.254116 kubelet[3186]: E0413 20:00:46.254087 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254116 kubelet[3186]: W0413 20:00:46.254109 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.254497 kubelet[3186]: E0413 20:00:46.254141 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:46.254497 kubelet[3186]: E0413 20:00:46.254287 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254497 kubelet[3186]: W0413 20:00:46.254295 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.254497 kubelet[3186]: E0413 20:00:46.254303 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.254497 kubelet[3186]: E0413 20:00:46.254430 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254497 kubelet[3186]: W0413 20:00:46.254437 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.254497 kubelet[3186]: E0413 20:00:46.254445 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:46.254647 kubelet[3186]: E0413 20:00:46.254562 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254647 kubelet[3186]: W0413 20:00:46.254569 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.254647 kubelet[3186]: E0413 20:00:46.254575 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.254706 kubelet[3186]: E0413 20:00:46.254698 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254706 kubelet[3186]: W0413 20:00:46.254704 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.254748 kubelet[3186]: E0413 20:00:46.254711 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:46.254834 kubelet[3186]: E0413 20:00:46.254823 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254834 kubelet[3186]: W0413 20:00:46.254833 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.254899 kubelet[3186]: E0413 20:00:46.254843 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.254975 kubelet[3186]: E0413 20:00:46.254965 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.254975 kubelet[3186]: W0413 20:00:46.254974 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255035 kubelet[3186]: E0413 20:00:46.254981 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:46.255110 kubelet[3186]: E0413 20:00:46.255100 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.255110 kubelet[3186]: W0413 20:00:46.255109 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255183 kubelet[3186]: E0413 20:00:46.255116 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.255270 kubelet[3186]: E0413 20:00:46.255259 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.255270 kubelet[3186]: W0413 20:00:46.255269 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255331 kubelet[3186]: E0413 20:00:46.255277 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:46.255409 kubelet[3186]: E0413 20:00:46.255399 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.255409 kubelet[3186]: W0413 20:00:46.255407 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255464 kubelet[3186]: E0413 20:00:46.255415 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.255534 kubelet[3186]: E0413 20:00:46.255524 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.255564 kubelet[3186]: W0413 20:00:46.255534 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255564 kubelet[3186]: E0413 20:00:46.255541 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:00:46.255663 kubelet[3186]: E0413 20:00:46.255654 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.255663 kubelet[3186]: W0413 20:00:46.255662 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255719 kubelet[3186]: E0413 20:00:46.255669 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:00:46.255799 kubelet[3186]: E0413 20:00:46.255789 3186 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:00:46.255799 kubelet[3186]: W0413 20:00:46.255798 3186 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:00:46.255852 kubelet[3186]: E0413 20:00:46.255805 3186 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 13 20:00:47.068506 kubelet[3186]: E0413 20:00:47.068453 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:48.582267 containerd[1763]: time="2026-04-13T20:00:48.581742168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:48.585339 containerd[1763]: time="2026-04-13T20:00:48.585313213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Apr 13 20:00:48.589238 containerd[1763]: time="2026-04-13T20:00:48.589210138Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:48.595704 containerd[1763]: time="2026-04-13T20:00:48.595671986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:48.596487 containerd[1763]: time="2026-04-13T20:00:48.596453667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 10.988247114s"
Apr 13 20:00:48.596655 containerd[1763]: time="2026-04-13T20:00:48.596573467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Apr 13 20:00:48.605194 containerd[1763]: time="2026-04-13T20:00:48.605160918Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 13 20:00:48.641161 containerd[1763]: time="2026-04-13T20:00:48.641103925Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae\""
Apr 13 20:00:48.641868 containerd[1763]: time="2026-04-13T20:00:48.641843366Z" level=info msg="StartContainer for \"b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae\""
Apr 13 20:00:48.670285 systemd[1]: Started cri-containerd-b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae.scope - libcontainer container b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae.
Apr 13 20:00:48.699518 containerd[1763]: time="2026-04-13T20:00:48.699474560Z" level=info msg="StartContainer for \"b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae\" returns successfully"
Apr 13 20:00:48.708938 systemd[1]: cri-containerd-b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae.scope: Deactivated successfully.
Apr 13 20:00:48.729112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae-rootfs.mount: Deactivated successfully.
Apr 13 20:00:49.198977 kubelet[3186]: E0413 20:00:49.068348 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:49.766006 containerd[1763]: time="2026-04-13T20:00:49.765948534Z" level=info msg="shim disconnected" id=b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae namespace=k8s.io
Apr 13 20:00:49.766006 containerd[1763]: time="2026-04-13T20:00:49.765998974Z" level=warning msg="cleaning up after shim disconnected" id=b902fd7148cb9b6dc6a435de9d58e8c3189ba7e75691816a731a4037656229ae namespace=k8s.io
Apr 13 20:00:49.766006 containerd[1763]: time="2026-04-13T20:00:49.766006814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:00:50.167354 containerd[1763]: time="2026-04-13T20:00:50.167275291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 13 20:00:51.068425 kubelet[3186]: E0413 20:00:51.067472 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:53.069291 kubelet[3186]: E0413 20:00:53.068312 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:55.068154 kubelet[3186]: E0413 20:00:55.067849 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:57.067491 kubelet[3186]: E0413 20:00:57.067451 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:58.309687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728308902.mount: Deactivated successfully.
Apr 13 20:00:58.361563 containerd[1763]: time="2026-04-13T20:00:58.361522391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:58.366738 containerd[1763]: time="2026-04-13T20:00:58.366711678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Apr 13 20:00:58.375787 containerd[1763]: time="2026-04-13T20:00:58.375745649Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:58.382056 containerd[1763]: time="2026-04-13T20:00:58.382019018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:00:58.382917 containerd[1763]: time="2026-04-13T20:00:58.382813819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 8.215498168s"
Apr 13 20:00:58.382917 containerd[1763]: time="2026-04-13T20:00:58.382852259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Apr 13 20:00:58.392397 containerd[1763]: time="2026-04-13T20:00:58.392265711Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 13 20:00:58.439823 containerd[1763]: time="2026-04-13T20:00:58.439776133Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099\""
Apr 13 20:00:58.440615 containerd[1763]: time="2026-04-13T20:00:58.440588014Z" level=info msg="StartContainer for \"fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099\""
Apr 13 20:00:58.472321 systemd[1]: Started cri-containerd-fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099.scope - libcontainer container fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099.
Apr 13 20:00:58.498914 containerd[1763]: time="2026-04-13T20:00:58.498863049Z" level=info msg="StartContainer for \"fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099\" returns successfully"
Apr 13 20:00:58.535710 systemd[1]: cri-containerd-fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099.scope: Deactivated successfully.
Apr 13 20:00:59.067709 kubelet[3186]: E0413 20:00:59.067388 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:00:59.309938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099-rootfs.mount: Deactivated successfully.
Apr 13 20:01:00.166936 containerd[1763]: time="2026-04-13T20:01:00.166736335Z" level=info msg="shim disconnected" id=fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099 namespace=k8s.io
Apr 13 20:01:00.166936 containerd[1763]: time="2026-04-13T20:01:00.166786495Z" level=warning msg="cleaning up after shim disconnected" id=fffc79f541861ee7161ce7999fe71118f8d5d5cef5eb9cf679ead68de4db3099 namespace=k8s.io
Apr 13 20:01:00.166936 containerd[1763]: time="2026-04-13T20:01:00.166794375Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:01:00.176147 containerd[1763]: time="2026-04-13T20:01:00.175751147Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:01:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:01:00.186945 containerd[1763]: time="2026-04-13T20:01:00.186913082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 13 20:01:01.071154 kubelet[3186]: E0413 20:01:01.070633 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:01:03.068142 kubelet[3186]: E0413 20:01:03.067588 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:01:03.666769 containerd[1763]: time="2026-04-13T20:01:03.666717845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:01:03.670728 containerd[1763]: time="2026-04-13T20:01:03.670478569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 13 20:01:03.676331 containerd[1763]: time="2026-04-13T20:01:03.674892535Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:01:03.681161 containerd[1763]: time="2026-04-13T20:01:03.680865343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:01:03.681760 containerd[1763]: time="2026-04-13T20:01:03.681587944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.494522062s"
Apr 13 20:01:03.681760 containerd[1763]: time="2026-04-13T20:01:03.681621864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 13 20:01:03.693649 containerd[1763]: time="2026-04-13T20:01:03.693605479Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 13 20:01:03.728007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010046064.mount: Deactivated successfully.
Apr 13 20:01:03.751400 containerd[1763]: time="2026-04-13T20:01:03.751355755Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d\""
Apr 13 20:01:03.753169 containerd[1763]: time="2026-04-13T20:01:03.752008355Z" level=info msg="StartContainer for \"b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d\""
Apr 13 20:01:03.778460 systemd[1]: run-containerd-runc-k8s.io-b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d-runc.UmQ3SX.mount: Deactivated successfully.
Apr 13 20:01:03.783398 systemd[1]: Started cri-containerd-b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d.scope - libcontainer container b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d.
Apr 13 20:01:03.812144 containerd[1763]: time="2026-04-13T20:01:03.812081073Z" level=info msg="StartContainer for \"b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d\" returns successfully"
Apr 13 20:01:05.068229 kubelet[3186]: E0413 20:01:05.068188 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mq4g6" podUID="18374f84-4b11-4d44-b742-59ff7eced87e"
Apr 13 20:01:05.069591 systemd[1]: cri-containerd-b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d.scope: Deactivated successfully.
Apr 13 20:01:05.095448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d-rootfs.mount: Deactivated successfully.
Apr 13 20:01:05.158069 kubelet[3186]: I0413 20:01:05.157080 3186 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 13 20:01:05.924080 systemd[1]: Created slice kubepods-burstable-podf11b10eb_dcb3_4727_9ed1_4a266865a659.slice - libcontainer container kubepods-burstable-podf11b10eb_dcb3_4727_9ed1_4a266865a659.slice.
Apr 13 20:01:05.930488 containerd[1763]: time="2026-04-13T20:01:05.930108146Z" level=info msg="shim disconnected" id=b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d namespace=k8s.io
Apr 13 20:01:05.930488 containerd[1763]: time="2026-04-13T20:01:05.930248667Z" level=warning msg="cleaning up after shim disconnected" id=b83f55cf2ae64e49bcaeb0a05e8591ec54ccd67960f5f7370b534ae0ce6aa38d namespace=k8s.io
Apr 13 20:01:05.930488 containerd[1763]: time="2026-04-13T20:01:05.930258667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:01:05.939880 systemd[1]: Created slice kubepods-besteffort-pod7c7a94b2_32bd_4e41_8a24_ff2f686a46d4.slice - libcontainer container kubepods-besteffort-pod7c7a94b2_32bd_4e41_8a24_ff2f686a46d4.slice.
Apr 13 20:01:05.958983 containerd[1763]: time="2026-04-13T20:01:05.958873264Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:01:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:01:05.961727 systemd[1]: Created slice kubepods-besteffort-pod09de3e5c_92f2_42e0_9bc5_3a7569d4f803.slice - libcontainer container kubepods-besteffort-pod09de3e5c_92f2_42e0_9bc5_3a7569d4f803.slice.
Apr 13 20:01:05.970611 systemd[1]: Created slice kubepods-burstable-podc02c5010_47a5_4036_b455_4b637140354f.slice - libcontainer container kubepods-burstable-podc02c5010_47a5_4036_b455_4b637140354f.slice.
Apr 13 20:01:05.982035 systemd[1]: Created slice kubepods-besteffort-podd4838596_14a2_436f_a221_2dc154b6f727.slice - libcontainer container kubepods-besteffort-podd4838596_14a2_436f_a221_2dc154b6f727.slice.
Apr 13 20:01:05.986992 systemd[1]: Created slice kubepods-besteffort-pod6a611f4d_2ffb_4de8_a21c_ba7debfd9296.slice - libcontainer container kubepods-besteffort-pod6a611f4d_2ffb_4de8_a21c_ba7debfd9296.slice.
Apr 13 20:01:05.994614 systemd[1]: Created slice kubepods-besteffort-poda73b8919_6d7b_42b2_aeb3_8bd24e7f26d3.slice - libcontainer container kubepods-besteffort-poda73b8919_6d7b_42b2_aeb3_8bd24e7f26d3.slice.
Apr 13 20:01:05.998472 kubelet[3186]: I0413 20:01:05.997802 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f11b10eb-dcb3-4727-9ed1-4a266865a659-config-volume\") pod \"coredns-674b8bbfcf-7624w\" (UID: \"f11b10eb-dcb3-4727-9ed1-4a266865a659\") " pod="kube-system/coredns-674b8bbfcf-7624w"
Apr 13 20:01:05.998472 kubelet[3186]: I0413 20:01:05.997843 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5bx\" (UniqueName: \"kubernetes.io/projected/c02c5010-47a5-4036-b455-4b637140354f-kube-api-access-tq5bx\") pod \"coredns-674b8bbfcf-bh6fw\" (UID: \"c02c5010-47a5-4036-b455-4b637140354f\") " pod="kube-system/coredns-674b8bbfcf-bh6fw"
Apr 13 20:01:05.998472 kubelet[3186]: I0413 20:01:05.997864 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4838596-14a2-436f-a221-2dc154b6f727-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-t6m5k\" (UID: \"d4838596-14a2-436f-a221-2dc154b6f727\") " pod="calico-system/goldmane-5b85766d88-t6m5k"
Apr 13 20:01:05.998472 kubelet[3186]: I0413 20:01:05.997879 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6slj\" (UniqueName: \"kubernetes.io/projected/d4838596-14a2-436f-a221-2dc154b6f727-kube-api-access-g6slj\") pod \"goldmane-5b85766d88-t6m5k\" (UID: \"d4838596-14a2-436f-a221-2dc154b6f727\") " pod="calico-system/goldmane-5b85766d88-t6m5k"
Apr 13 20:01:05.998472 kubelet[3186]: I0413 20:01:05.997895 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09de3e5c-92f2-42e0-9bc5-3a7569d4f803-tigera-ca-bundle\") pod \"calico-kube-controllers-5bfd86f5bd-q2wp4\" (UID: \"09de3e5c-92f2-42e0-9bc5-3a7569d4f803\") " pod="calico-system/calico-kube-controllers-5bfd86f5bd-q2wp4"
Apr 13 20:01:05.998680 kubelet[3186]: I0413 20:01:05.997928 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-backend-key-pair\") pod \"whisker-6fb7d689cd-z5r7b\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " pod="calico-system/whisker-6fb7d689cd-z5r7b"
Apr 13 20:01:05.998680 kubelet[3186]: I0413 20:01:05.997976 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-ca-bundle\") pod \"whisker-6fb7d689cd-z5r7b\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " pod="calico-system/whisker-6fb7d689cd-z5r7b"
Apr 13 20:01:05.998680 kubelet[3186]: I0413 20:01:05.998022 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286jk\" (UniqueName: \"kubernetes.io/projected/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-kube-api-access-286jk\") pod \"whisker-6fb7d689cd-z5r7b\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " pod="calico-system/whisker-6fb7d689cd-z5r7b"
Apr 13 20:01:05.998680 kubelet[3186]: I0413 20:01:05.998044 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c02c5010-47a5-4036-b455-4b637140354f-config-volume\") pod \"coredns-674b8bbfcf-bh6fw\" (UID: \"c02c5010-47a5-4036-b455-4b637140354f\") " pod="kube-system/coredns-674b8bbfcf-bh6fw"
Apr 13 20:01:05.998680 kubelet[3186]: I0413 20:01:05.998062 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4838596-14a2-436f-a221-2dc154b6f727-config\") pod \"goldmane-5b85766d88-t6m5k\" (UID: \"d4838596-14a2-436f-a221-2dc154b6f727\") " pod="calico-system/goldmane-5b85766d88-t6m5k"
Apr 13 20:01:05.998798 kubelet[3186]: I0413 20:01:05.998098 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d4838596-14a2-436f-a221-2dc154b6f727-goldmane-key-pair\") pod \"goldmane-5b85766d88-t6m5k\" (UID: \"d4838596-14a2-436f-a221-2dc154b6f727\") " pod="calico-system/goldmane-5b85766d88-t6m5k"
Apr 13 20:01:05.998798 kubelet[3186]: I0413 20:01:05.998117 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3-calico-apiserver-certs\") pod \"calico-apiserver-8477974547-fl6tk\" (UID: \"a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3\") " pod="calico-system/calico-apiserver-8477974547-fl6tk"
Apr 13 20:01:05.998798 kubelet[3186]: I0413 20:01:05.998153 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqtx\" (UniqueName: \"kubernetes.io/projected/6a611f4d-2ffb-4de8-a21c-ba7debfd9296-kube-api-access-hbqtx\") pod \"calico-apiserver-8477974547-xp2gk\" (UID: \"6a611f4d-2ffb-4de8-a21c-ba7debfd9296\") " pod="calico-system/calico-apiserver-8477974547-xp2gk"
Apr 13 20:01:05.998798 kubelet[3186]: I0413 20:01:05.998171 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j96\" (UniqueName: \"kubernetes.io/projected/f11b10eb-dcb3-4727-9ed1-4a266865a659-kube-api-access-t9j96\") pod \"coredns-674b8bbfcf-7624w\" (UID: \"f11b10eb-dcb3-4727-9ed1-4a266865a659\") " 
pod="kube-system/coredns-674b8bbfcf-7624w" Apr 13 20:01:05.998798 kubelet[3186]: I0413 20:01:05.998226 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdfz\" (UniqueName: \"kubernetes.io/projected/09de3e5c-92f2-42e0-9bc5-3a7569d4f803-kube-api-access-qqdfz\") pod \"calico-kube-controllers-5bfd86f5bd-q2wp4\" (UID: \"09de3e5c-92f2-42e0-9bc5-3a7569d4f803\") " pod="calico-system/calico-kube-controllers-5bfd86f5bd-q2wp4" Apr 13 20:01:05.998910 kubelet[3186]: I0413 20:01:05.998247 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qjdd\" (UniqueName: \"kubernetes.io/projected/a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3-kube-api-access-4qjdd\") pod \"calico-apiserver-8477974547-fl6tk\" (UID: \"a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3\") " pod="calico-system/calico-apiserver-8477974547-fl6tk" Apr 13 20:01:05.998910 kubelet[3186]: I0413 20:01:05.998266 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-nginx-config\") pod \"whisker-6fb7d689cd-z5r7b\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " pod="calico-system/whisker-6fb7d689cd-z5r7b" Apr 13 20:01:05.998910 kubelet[3186]: I0413 20:01:05.998295 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6a611f4d-2ffb-4de8-a21c-ba7debfd9296-calico-apiserver-certs\") pod \"calico-apiserver-8477974547-xp2gk\" (UID: \"6a611f4d-2ffb-4de8-a21c-ba7debfd9296\") " pod="calico-system/calico-apiserver-8477974547-xp2gk" Apr 13 20:01:06.218899 containerd[1763]: time="2026-04-13T20:01:06.218733882Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:01:06.236158 containerd[1763]: time="2026-04-13T20:01:06.235705864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7624w,Uid:f11b10eb-dcb3-4727-9ed1-4a266865a659,Namespace:kube-system,Attempt:0,}" Apr 13 20:01:06.261800 containerd[1763]: time="2026-04-13T20:01:06.261760858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb7d689cd-z5r7b,Uid:7c7a94b2-32bd-4e41-8a24-ff2f686a46d4,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:06.276576 containerd[1763]: time="2026-04-13T20:01:06.276059796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfd86f5bd-q2wp4,Uid:09de3e5c-92f2-42e0-9bc5-3a7569d4f803,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:06.277039 containerd[1763]: time="2026-04-13T20:01:06.276968677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bh6fw,Uid:c02c5010-47a5-4036-b455-4b637140354f,Namespace:kube-system,Attempt:0,}" Apr 13 20:01:06.286914 containerd[1763]: time="2026-04-13T20:01:06.286873170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-t6m5k,Uid:d4838596-14a2-436f-a221-2dc154b6f727,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:06.293116 containerd[1763]: time="2026-04-13T20:01:06.293072458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8477974547-xp2gk,Uid:6a611f4d-2ffb-4de8-a21c-ba7debfd9296,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:06.298062 containerd[1763]: time="2026-04-13T20:01:06.298025225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8477974547-fl6tk,Uid:a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:06.333696 containerd[1763]: time="2026-04-13T20:01:06.333572631Z" level=info msg="CreateContainer within sandbox \"b44c34ce05ed40c71ed21dca11c9043a2f60bdbebf7fea7fa193e5bda9a66f96\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eb487b3aa91b1a47e4ae2ceb600daf8854c5385d608a1880e6188549ac1a5cce\"" Apr 13 20:01:06.334990 containerd[1763]: time="2026-04-13T20:01:06.334971553Z" level=info msg="StartContainer for \"eb487b3aa91b1a47e4ae2ceb600daf8854c5385d608a1880e6188549ac1a5cce\"" Apr 13 20:01:06.364301 systemd[1]: Started cri-containerd-eb487b3aa91b1a47e4ae2ceb600daf8854c5385d608a1880e6188549ac1a5cce.scope - libcontainer container eb487b3aa91b1a47e4ae2ceb600daf8854c5385d608a1880e6188549ac1a5cce. Apr 13 20:01:06.398801 containerd[1763]: time="2026-04-13T20:01:06.398750876Z" level=info msg="StartContainer for \"eb487b3aa91b1a47e4ae2ceb600daf8854c5385d608a1880e6188549ac1a5cce\" returns successfully" Apr 13 20:01:06.586079 containerd[1763]: time="2026-04-13T20:01:06.585867239Z" level=error msg="Failed to destroy network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:01:06.589067 containerd[1763]: time="2026-04-13T20:01:06.589028243Z" level=error msg="encountered an error cleaning up failed sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:01:06.592372 containerd[1763]: time="2026-04-13T20:01:06.592328567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7624w,Uid:f11b10eb-dcb3-4727-9ed1-4a266865a659,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:01:06.594606 kubelet[3186]: E0413 20:01:06.594264 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:01:06.594606 kubelet[3186]: E0413 20:01:06.594324 3186 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7624w" Apr 13 20:01:06.594606 kubelet[3186]: E0413 20:01:06.594343 3186 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7624w" Apr 13 20:01:06.594992 kubelet[3186]: E0413 20:01:06.594386 3186 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7624w_kube-system(f11b10eb-dcb3-4727-9ed1-4a266865a659)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7624w_kube-system(f11b10eb-dcb3-4727-9ed1-4a266865a659)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7624w" podUID="f11b10eb-dcb3-4727-9ed1-4a266865a659" Apr 13 20:01:07.027573 systemd-networkd[1361]: cali6998814bad2: Link UP Apr 13 20:01:07.029706 systemd-networkd[1361]: cali6998814bad2: Gained carrier Apr 13 20:01:07.033672 systemd-networkd[1361]: cali3c654aaf199: Link UP Apr 13 20:01:07.035683 systemd-networkd[1361]: cali3c654aaf199: Gained carrier Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.707 [ERROR][4256] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.752 [INFO][4256] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0 calico-apiserver-8477974547- calico-system a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3 918 0 2026-04-13 20:00:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8477974547 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 calico-apiserver-8477974547-fl6tk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6998814bad2 [] [] }} ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-" Apr 13 20:01:07.056003 containerd[1763]: 
2026-04-13 20:01:06.752 [INFO][4256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.852 [INFO][4300] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" HandleID="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.862 [INFO][4300] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" HandleID="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"calico-apiserver-8477974547-fl6tk", "timestamp":"2026-04-13 20:01:06.852020745 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c8580)} Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.862 [INFO][4300] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.862 [INFO][4300] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.862 [INFO][4300] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.865 [INFO][4300] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.872 [INFO][4300] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.903 [INFO][4300] ipam/ipam.go 558: Ran out of existing affine blocks for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.909 [INFO][4300] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.912 [INFO][4300] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.912 [INFO][4300] ipam/ipam.go 588: Found unclaimed block in 3.486645ms host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.912 [INFO][4300] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.918 [INFO][4300] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.918 [INFO][4300] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.921 
[INFO][4300] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.924 [INFO][4300] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.926 [INFO][4300] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.926 [INFO][4300] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.931 [INFO][4300] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.931 [INFO][4300] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.938 [INFO][4300] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.938 [INFO][4300] ipam/ipam.go 623: Block '192.168.74.0/26' has 64 free ips which is more than 1 ips required. 
host="ci-4081.3.7-a-39cd336750" subnet=192.168.74.0/26 Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.938 [INFO][4300] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056003 containerd[1763]: 2026-04-13 20:01:06.940 [INFO][4300] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.947 [INFO][4300] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.957 [INFO][4300] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.0/26] block=192.168.74.0/26 handle="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.957 [INFO][4300] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.0/26] handle="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.957 [INFO][4300] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.957 [INFO][4300] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.0/26] IPv6=[] ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" HandleID="k8s-pod-network.0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.960 [INFO][4256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0", GenerateName:"calico-apiserver-8477974547-", Namespace:"calico-system", SelfLink:"", UID:"a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8477974547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"calico-apiserver-8477974547-fl6tk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.74.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6998814bad2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.960 [INFO][4256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.0/32] ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:06.960 [INFO][4256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6998814bad2 ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:07.028 [INFO][4256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.056864 containerd[1763]: 2026-04-13 20:01:07.029 [INFO][4256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0", GenerateName:"calico-apiserver-8477974547-", Namespace:"calico-system", SelfLink:"", UID:"a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8477974547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e", Pod:"calico-apiserver-8477974547-fl6tk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6998814bad2", MAC:"76:5c:fd:e0:6a:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.057121 containerd[1763]: 2026-04-13 20:01:07.052 [INFO][4256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e" Namespace="calico-system" Pod="calico-apiserver-8477974547-fl6tk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--fl6tk-eth0" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.664 [ERROR][4228] cni-plugin/utils.go 116: File 
does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.745 [INFO][4228] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0 coredns-674b8bbfcf- kube-system c02c5010-47a5-4036-b455-4b637140354f 917 0 2026-04-13 20:00:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 coredns-674b8bbfcf-bh6fw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c654aaf199 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.745 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.884 [INFO][4289] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" HandleID="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.907 [INFO][4289] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" HandleID="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039a350), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"coredns-674b8bbfcf-bh6fw", "timestamp":"2026-04-13 20:01:06.884855827 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000187600)} Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.907 [INFO][4289] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.957 [INFO][4289] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.957 [INFO][4289] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.965 [INFO][4289] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.972 [INFO][4289] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.985 [INFO][4289] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.987 [INFO][4289] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.990 [INFO][4289] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.990 [INFO][4289] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.992 [INFO][4289] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0 Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:06.997 [INFO][4289] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:07.010 [INFO][4289] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.1/26] block=192.168.74.0/26 handle="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:07.010 [INFO][4289] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.1/26] handle="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:07.010 [INFO][4289] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.063012 containerd[1763]: 2026-04-13 20:01:07.010 [INFO][4289] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.1/26] IPv6=[] ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" HandleID="k8s-pod-network.6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.063546 containerd[1763]: 2026-04-13 20:01:07.015 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c02c5010-47a5-4036-b455-4b637140354f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"coredns-674b8bbfcf-bh6fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c654aaf199", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.063546 containerd[1763]: 2026-04-13 20:01:07.015 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.1/32] ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.063546 containerd[1763]: 2026-04-13 20:01:07.015 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c654aaf199 ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.063546 containerd[1763]: 2026-04-13 20:01:07.035 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.063546 containerd[1763]: 2026-04-13 20:01:07.039 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c02c5010-47a5-4036-b455-4b637140354f", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0", Pod:"coredns-674b8bbfcf-bh6fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c654aaf199", MAC:"7e:9b:c9:ee:fc:c5", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.063546 containerd[1763]: 2026-04-13 20:01:07.058 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0" Namespace="kube-system" Pod="coredns-674b8bbfcf-bh6fw" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--bh6fw-eth0" Apr 13 20:01:07.076142 systemd[1]: Created slice kubepods-besteffort-pod18374f84_4b11_4d44_b742_59ff7eced87e.slice - libcontainer container kubepods-besteffort-pod18374f84_4b11_4d44_b742_59ff7eced87e.slice. Apr 13 20:01:07.084640 containerd[1763]: time="2026-04-13T20:01:07.084392847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mq4g6,Uid:18374f84-4b11-4d44-b742-59ff7eced87e,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:07.110606 containerd[1763]: time="2026-04-13T20:01:07.109863240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:07.110606 containerd[1763]: time="2026-04-13T20:01:07.109917600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:07.110606 containerd[1763]: time="2026-04-13T20:01:07.109936160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.110606 containerd[1763]: time="2026-04-13T20:01:07.110076760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.111892 containerd[1763]: time="2026-04-13T20:01:07.111822802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:07.112149 containerd[1763]: time="2026-04-13T20:01:07.112061523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:07.112421 containerd[1763]: time="2026-04-13T20:01:07.112114963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.112421 containerd[1763]: time="2026-04-13T20:01:07.112373283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.176318 systemd[1]: Started cri-containerd-0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e.scope - libcontainer container 0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e. Apr 13 20:01:07.178383 systemd[1]: Started cri-containerd-6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0.scope - libcontainer container 6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0. 
Apr 13 20:01:07.185262 systemd-networkd[1361]: cali75f89d2ca89: Link UP Apr 13 20:01:07.187183 systemd-networkd[1361]: cali75f89d2ca89: Gained carrier Apr 13 20:01:07.218781 kubelet[3186]: I0413 20:01:07.218551 3186 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:07.221523 containerd[1763]: time="2026-04-13T20:01:07.221492585Z" level=info msg="StopPodSandbox for \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\"" Apr 13 20:01:07.221788 containerd[1763]: time="2026-04-13T20:01:07.221769105Z" level=info msg="Ensure that sandbox fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78 in task-service has been cleanup successfully" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:06.664 [ERROR][4214] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:06.751 [INFO][4214] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0 calico-kube-controllers-5bfd86f5bd- calico-system 09de3e5c-92f2-42e0-9bc5-3a7569d4f803 916 0 2026-04-13 20:00:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bfd86f5bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 calico-kube-controllers-5bfd86f5bd-q2wp4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali75f89d2ca89 [] [] }} ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" 
Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:06.751 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:06.856 [INFO][4308] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" HandleID="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:06.911 [INFO][4308] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" HandleID="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"calico-kube-controllers-5bfd86f5bd-q2wp4", "timestamp":"2026-04-13 20:01:06.856502031 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c5b80)} Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:06.911 [INFO][4308] ipam/ipam_plugin.go 438: About to 
acquire host-wide IPAM lock. Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.010 [INFO][4308] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.010 [INFO][4308] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.067 [INFO][4308] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.085 [INFO][4308] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.092 [INFO][4308] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.094 [INFO][4308] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.098 [INFO][4308] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.098 [INFO][4308] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.101 [INFO][4308] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81 Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.123 [INFO][4308] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 
handle="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.165 [INFO][4308] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.3/26] block=192.168.74.0/26 handle="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.165 [INFO][4308] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.3/26] handle="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.165 [INFO][4308] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.222741 containerd[1763]: 2026-04-13 20:01:07.165 [INFO][4308] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.3/26] IPv6=[] ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" HandleID="k8s-pod-network.11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.224391 containerd[1763]: 2026-04-13 20:01:07.178 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0", GenerateName:"calico-kube-controllers-5bfd86f5bd-", Namespace:"calico-system", SelfLink:"", UID:"09de3e5c-92f2-42e0-9bc5-3a7569d4f803", ResourceVersion:"916", 
Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfd86f5bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"calico-kube-controllers-5bfd86f5bd-q2wp4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75f89d2ca89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.224391 containerd[1763]: 2026-04-13 20:01:07.180 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.3/32] ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.224391 containerd[1763]: 2026-04-13 20:01:07.180 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75f89d2ca89 ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.224391 
containerd[1763]: 2026-04-13 20:01:07.189 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.224391 containerd[1763]: 2026-04-13 20:01:07.190 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0", GenerateName:"calico-kube-controllers-5bfd86f5bd-", Namespace:"calico-system", SelfLink:"", UID:"09de3e5c-92f2-42e0-9bc5-3a7569d4f803", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bfd86f5bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81", Pod:"calico-kube-controllers-5bfd86f5bd-q2wp4", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75f89d2ca89", MAC:"16:90:b3:08:c9:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.224391 containerd[1763]: 2026-04-13 20:01:07.216 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81" Namespace="calico-system" Pod="calico-kube-controllers-5bfd86f5bd-q2wp4" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--kube--controllers--5bfd86f5bd--q2wp4-eth0" Apr 13 20:01:07.283696 kubelet[3186]: I0413 20:01:07.283556 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wppks" podStartSLOduration=4.8416377409999996 podStartE2EDuration="37.283538506s" podCreationTimestamp="2026-04-13 20:00:30 +0000 UTC" firstStartedPulling="2026-04-13 20:00:31.2407279 +0000 UTC m=+20.291082547" lastFinishedPulling="2026-04-13 20:01:03.682628665 +0000 UTC m=+52.732983312" observedRunningTime="2026-04-13 20:01:07.282333224 +0000 UTC m=+56.332687951" watchObservedRunningTime="2026-04-13 20:01:07.283538506 +0000 UTC m=+56.333893113" Apr 13 20:01:07.322728 containerd[1763]: time="2026-04-13T20:01:07.313383304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:07.322728 containerd[1763]: time="2026-04-13T20:01:07.313444224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:07.322728 containerd[1763]: time="2026-04-13T20:01:07.313464065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.322728 containerd[1763]: time="2026-04-13T20:01:07.313542105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.352612 containerd[1763]: time="2026-04-13T20:01:07.352567995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bh6fw,Uid:c02c5010-47a5-4036-b455-4b637140354f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0\"" Apr 13 20:01:07.375159 containerd[1763]: time="2026-04-13T20:01:07.374364864Z" level=info msg="CreateContainer within sandbox \"6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:01:07.420119 systemd-networkd[1361]: cali43a38b7fd50: Link UP Apr 13 20:01:07.420528 systemd-networkd[1361]: cali43a38b7fd50: Gained carrier Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:06.682 [ERROR][4235] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:06.746 [INFO][4235] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0 goldmane-5b85766d88- calico-system d4838596-14a2-436f-a221-2dc154b6f727 919 0 2026-04-13 20:00:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 goldmane-5b85766d88-t6m5k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali43a38b7fd50 [] [] }} 
ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:06.747 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:06.885 [INFO][4293] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" HandleID="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Workload="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:06.919 [INFO][4293] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" HandleID="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Workload="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038fd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"goldmane-5b85766d88-t6m5k", "timestamp":"2026-04-13 20:01:06.885619468 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000398f20)} Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:06.919 [INFO][4293] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.166 [INFO][4293] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.166 [INFO][4293] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.184 [INFO][4293] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.196 [INFO][4293] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.240 [INFO][4293] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.271 [INFO][4293] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.285 [INFO][4293] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.285 [INFO][4293] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.297 [INFO][4293] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3 Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.324 [INFO][4293] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 
handle="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.362 [INFO][4293] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.4/26] block=192.168.74.0/26 handle="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.364 [INFO][4293] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.4/26] handle="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.364 [INFO][4293] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.457467 containerd[1763]: 2026-04-13 20:01:07.364 [INFO][4293] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.4/26] IPv6=[] ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" HandleID="k8s-pod-network.57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Workload="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.458066 containerd[1763]: 2026-04-13 20:01:07.378 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d4838596-14a2-436f-a221-2dc154b6f727", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 29, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"goldmane-5b85766d88-t6m5k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43a38b7fd50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.458066 containerd[1763]: 2026-04-13 20:01:07.378 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.4/32] ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.458066 containerd[1763]: 2026-04-13 20:01:07.378 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43a38b7fd50 ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.458066 containerd[1763]: 2026-04-13 20:01:07.422 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" 
Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.458066 containerd[1763]: 2026-04-13 20:01:07.432 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"d4838596-14a2-436f-a221-2dc154b6f727", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3", Pod:"goldmane-5b85766d88-t6m5k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali43a38b7fd50", MAC:"12:aa:14:6b:04:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.458066 containerd[1763]: 2026-04-13 20:01:07.455 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3" Namespace="calico-system" Pod="goldmane-5b85766d88-t6m5k" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-goldmane--5b85766d88--t6m5k-eth0" Apr 13 20:01:07.470492 systemd[1]: Started cri-containerd-11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81.scope - libcontainer container 11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81. Apr 13 20:01:07.527206 systemd-networkd[1361]: cali6337027c7e4: Link UP Apr 13 20:01:07.528016 systemd-networkd[1361]: cali6337027c7e4: Gained carrier Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:06.691 [ERROR][4248] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:06.746 [INFO][4248] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0 calico-apiserver-8477974547- calico-system 6a611f4d-2ffb-4de8-a21c-ba7debfd9296 920 0 2026-04-13 20:00:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8477974547 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 calico-apiserver-8477974547-xp2gk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6337027c7e4 [] [] }} ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" 
WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:06.746 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:06.897 [INFO][4290] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" HandleID="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:06.922 [INFO][4290] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" HandleID="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003466d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"calico-apiserver-8477974547-xp2gk", "timestamp":"2026-04-13 20:01:06.897720924 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400034e580)} Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:06.922 [INFO][4290] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.364 [INFO][4290] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.364 [INFO][4290] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.378 [INFO][4290] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.397 [INFO][4290] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.439 [INFO][4290] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.453 [INFO][4290] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.467 [INFO][4290] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.467 [INFO][4290] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.477 [INFO][4290] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38 Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.488 [INFO][4290] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" 
host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.513 [INFO][4290] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.74.5/26] block=192.168.74.0/26 handle="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.513 [INFO][4290] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.5/26] handle="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.513 [INFO][4290] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.564987 containerd[1763]: 2026-04-13 20:01:07.513 [INFO][4290] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.5/26] IPv6=[] ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" HandleID="k8s-pod-network.23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Workload="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.565549 containerd[1763]: 2026-04-13 20:01:07.524 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0", GenerateName:"calico-apiserver-8477974547-", Namespace:"calico-system", SelfLink:"", UID:"6a611f4d-2ffb-4de8-a21c-ba7debfd9296", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8477974547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"calico-apiserver-8477974547-xp2gk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6337027c7e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.565549 containerd[1763]: 2026-04-13 20:01:07.524 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.5/32] ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.565549 containerd[1763]: 2026-04-13 20:01:07.524 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6337027c7e4 ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.565549 containerd[1763]: 2026-04-13 20:01:07.528 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.565549 containerd[1763]: 2026-04-13 20:01:07.528 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0", GenerateName:"calico-apiserver-8477974547-", Namespace:"calico-system", SelfLink:"", UID:"6a611f4d-2ffb-4de8-a21c-ba7debfd9296", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8477974547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38", Pod:"calico-apiserver-8477974547-xp2gk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6337027c7e4", MAC:"d2:4c:b5:17:dc:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.565549 containerd[1763]: 2026-04-13 20:01:07.548 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38" Namespace="calico-system" Pod="calico-apiserver-8477974547-xp2gk" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-calico--apiserver--8477974547--xp2gk-eth0" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.787 [INFO][4213] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.788 [INFO][4213] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" iface="eth0" netns="/var/run/netns/cni-e87f1388-297c-484d-a178-d9c5fa25991f" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.796 [INFO][4213] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" iface="eth0" netns="/var/run/netns/cni-e87f1388-297c-484d-a178-d9c5fa25991f" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.798 [INFO][4213] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" iface="eth0" netns="/var/run/netns/cni-e87f1388-297c-484d-a178-d9c5fa25991f" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.798 [INFO][4213] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.798 [INFO][4213] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.922 [INFO][4314] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" HandleID="k8s-pod-network.b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Workload="ci--4081.3.7--a--39cd336750-k8s-whisker--6fb7d689cd--z5r7b-eth0" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:06.922 [INFO][4314] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:07.516 [INFO][4314] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:07.547 [WARNING][4314] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" HandleID="k8s-pod-network.b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Workload="ci--4081.3.7--a--39cd336750-k8s-whisker--6fb7d689cd--z5r7b-eth0" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:07.548 [INFO][4314] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" HandleID="k8s-pod-network.b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Workload="ci--4081.3.7--a--39cd336750-k8s-whisker--6fb7d689cd--z5r7b-eth0" Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:07.563 [INFO][4314] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.579110 containerd[1763]: 2026-04-13 20:01:07.568 [INFO][4213] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf" Apr 13 20:01:07.590389 containerd[1763]: time="2026-04-13T20:01:07.590215944Z" level=info msg="CreateContainer within sandbox \"6c3a4e06ce21efa7926910fce81cdbe453d8cb0faa18f6536c875cfae7740aa0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48f4749988250c505144991ebf7d8fbb19a8018871e30059169aba027fa354dd\"" Apr 13 20:01:07.593474 containerd[1763]: time="2026-04-13T20:01:07.593439108Z" level=info msg="StartContainer for \"48f4749988250c505144991ebf7d8fbb19a8018871e30059169aba027fa354dd\"" Apr 13 20:01:07.598660 containerd[1763]: time="2026-04-13T20:01:07.597568674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fb7d689cd-z5r7b,Uid:7c7a94b2-32bd-4e41-8a24-ff2f686a46d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 13 20:01:07.598793 kubelet[3186]: E0413 20:01:07.597803 3186 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:01:07.598793 kubelet[3186]: E0413 20:01:07.597858 3186 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fb7d689cd-z5r7b" Apr 13 20:01:07.599777 containerd[1763]: time="2026-04-13T20:01:07.597331313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:07.599777 containerd[1763]: time="2026-04-13T20:01:07.597401554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:07.599777 containerd[1763]: time="2026-04-13T20:01:07.597416114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.599777 containerd[1763]: time="2026-04-13T20:01:07.598600435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.623706 containerd[1763]: time="2026-04-13T20:01:07.623672948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8477974547-fl6tk,Uid:a73b8919-6d7b-42b2-aeb3-8bd24e7f26d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e\"" Apr 13 20:01:07.630609 containerd[1763]: time="2026-04-13T20:01:07.630281356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:01:07.640271 systemd-networkd[1361]: cali82c5df50ac5: Link UP Apr 13 20:01:07.642589 systemd-networkd[1361]: cali82c5df50ac5: Gained carrier Apr 13 20:01:07.667616 containerd[1763]: time="2026-04-13T20:01:07.667534245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:07.667786 containerd[1763]: time="2026-04-13T20:01:07.667763085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:07.667967 containerd[1763]: time="2026-04-13T20:01:07.667929645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.668236 containerd[1763]: time="2026-04-13T20:01:07.668196646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.668283 systemd[1]: Started cri-containerd-57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3.scope - libcontainer container 57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3. 
Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.477 [INFO][4434] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.478 [INFO][4434] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" iface="eth0" netns="/var/run/netns/cni-56ea491c-4f75-b658-3958-759a9f6e751b" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.478 [INFO][4434] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" iface="eth0" netns="/var/run/netns/cni-56ea491c-4f75-b658-3958-759a9f6e751b" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.478 [INFO][4434] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" iface="eth0" netns="/var/run/netns/cni-56ea491c-4f75-b658-3958-759a9f6e751b" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.478 [INFO][4434] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.478 [INFO][4434] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.516 [INFO][4513] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.516 [INFO][4513] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.628 [INFO][4513] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.668 [WARNING][4513] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.669 [INFO][4513] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.675 [INFO][4513] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.691209 containerd[1763]: 2026-04-13 20:01:07.682 [INFO][4434] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.229 [ERROR][4397] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.285 [INFO][4397] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0 csi-node-driver- calico-system 18374f84-4b11-4d44-b742-59ff7eced87e 739 0 2026-04-13 20:00:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 csi-node-driver-mq4g6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali82c5df50ac5 [] [] }} ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.285 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.435 [INFO][4462] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" 
HandleID="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Workload="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.480 [INFO][4462] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" HandleID="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Workload="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"csi-node-driver-mq4g6", "timestamp":"2026-04-13 20:01:07.435583783 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000674160)} Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.481 [INFO][4462] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.563 [INFO][4462] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.563 [INFO][4462] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.569 [INFO][4462] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.577 [INFO][4462] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.583 [INFO][4462] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.589 [INFO][4462] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.596 [INFO][4462] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.596 [INFO][4462] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.601 [INFO][4462] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1 Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.609 [INFO][4462] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.627 [INFO][4462] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.6/26] block=192.168.74.0/26 handle="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.627 [INFO][4462] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.6/26] handle="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.627 [INFO][4462] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.692029 containerd[1763]: 2026-04-13 20:01:07.627 [INFO][4462] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.6/26] IPv6=[] ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" HandleID="k8s-pod-network.affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Workload="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.693849 containerd[1763]: 2026-04-13 20:01:07.634 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18374f84-4b11-4d44-b742-59ff7eced87e", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"csi-node-driver-mq4g6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82c5df50ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.693849 containerd[1763]: 2026-04-13 20:01:07.635 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.6/32] ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.693849 containerd[1763]: 2026-04-13 20:01:07.635 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82c5df50ac5 ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.693849 containerd[1763]: 2026-04-13 20:01:07.653 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.693849 containerd[1763]: 2026-04-13 20:01:07.659 
[INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"18374f84-4b11-4d44-b742-59ff7eced87e", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1", Pod:"csi-node-driver-mq4g6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82c5df50ac5", MAC:"3a:c5:3a:97:61:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.693849 containerd[1763]: 2026-04-13 20:01:07.684 [INFO][4397] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1" Namespace="calico-system" Pod="csi-node-driver-mq4g6" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-csi--node--driver--mq4g6-eth0" Apr 13 20:01:07.694230 containerd[1763]: time="2026-04-13T20:01:07.694188239Z" level=info msg="TearDown network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\" successfully" Apr 13 20:01:07.694891 containerd[1763]: time="2026-04-13T20:01:07.694874720Z" level=info msg="StopPodSandbox for \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\" returns successfully" Apr 13 20:01:07.696772 containerd[1763]: time="2026-04-13T20:01:07.696741563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7624w,Uid:f11b10eb-dcb3-4727-9ed1-4a266865a659,Namespace:kube-system,Attempt:1,}" Apr 13 20:01:07.707517 systemd[1]: Started cri-containerd-48f4749988250c505144991ebf7d8fbb19a8018871e30059169aba027fa354dd.scope - libcontainer container 48f4749988250c505144991ebf7d8fbb19a8018871e30059169aba027fa354dd. Apr 13 20:01:07.735323 systemd[1]: Started cri-containerd-23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38.scope - libcontainer container 23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38. Apr 13 20:01:07.742855 containerd[1763]: time="2026-04-13T20:01:07.742821263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bfd86f5bd-q2wp4,Uid:09de3e5c-92f2-42e0-9bc5-3a7569d4f803,Namespace:calico-system,Attempt:0,} returns sandbox id \"11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81\"" Apr 13 20:01:07.755313 containerd[1763]: time="2026-04-13T20:01:07.755101559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:07.755313 containerd[1763]: time="2026-04-13T20:01:07.755177079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:07.755313 containerd[1763]: time="2026-04-13T20:01:07.755192479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.755313 containerd[1763]: time="2026-04-13T20:01:07.755271719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:07.797778 systemd[1]: Started cri-containerd-affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1.scope - libcontainer container affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1. Apr 13 20:01:07.813896 containerd[1763]: time="2026-04-13T20:01:07.813570795Z" level=info msg="StartContainer for \"48f4749988250c505144991ebf7d8fbb19a8018871e30059169aba027fa354dd\" returns successfully" Apr 13 20:01:07.825089 containerd[1763]: time="2026-04-13T20:01:07.823923368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-t6m5k,Uid:d4838596-14a2-436f-a221-2dc154b6f727,Namespace:calico-system,Attempt:0,} returns sandbox id \"57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3\"" Apr 13 20:01:07.846309 containerd[1763]: time="2026-04-13T20:01:07.845356596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8477974547-xp2gk,Uid:6a611f4d-2ffb-4de8-a21c-ba7debfd9296,Namespace:calico-system,Attempt:0,} returns sandbox id \"23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38\"" Apr 13 20:01:07.873165 containerd[1763]: time="2026-04-13T20:01:07.872276591Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-mq4g6,Uid:18374f84-4b11-4d44-b742-59ff7eced87e,Namespace:calico-system,Attempt:0,} returns sandbox id \"affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1\"" Apr 13 20:01:07.953581 systemd-networkd[1361]: cali9b7485e2615: Link UP Apr 13 20:01:07.953868 systemd-networkd[1361]: cali9b7485e2615: Gained carrier Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.853 [ERROR][4678] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.887 [INFO][4678] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0 coredns-674b8bbfcf- kube-system f11b10eb-dcb3-4727-9ed1-4a266865a659 967 0 2026-04-13 20:00:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 coredns-674b8bbfcf-7624w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b7485e2615 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.887 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.909 
[INFO][4744] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" HandleID="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.918 [INFO][4744] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" HandleID="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273940), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"coredns-674b8bbfcf-7624w", "timestamp":"2026-04-13 20:01:07.90999168 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000540f20)} Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.919 [INFO][4744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.919 [INFO][4744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.919 [INFO][4744] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.920 [INFO][4744] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.924 [INFO][4744] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.927 [INFO][4744] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.929 [INFO][4744] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.931 [INFO][4744] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.931 [INFO][4744] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.934 [INFO][4744] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.938 [INFO][4744] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.948 [INFO][4744] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.7/26] block=192.168.74.0/26 handle="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.948 [INFO][4744] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.7/26] handle="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.948 [INFO][4744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:07.975647 containerd[1763]: 2026-04-13 20:01:07.948 [INFO][4744] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.7/26] IPv6=[] ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" HandleID="k8s-pod-network.6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.976569 containerd[1763]: 2026-04-13 20:01:07.950 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f11b10eb-dcb3-4727-9ed1-4a266865a659", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"coredns-674b8bbfcf-7624w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7485e2615", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.976569 containerd[1763]: 2026-04-13 20:01:07.951 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.7/32] ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.976569 containerd[1763]: 2026-04-13 20:01:07.951 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b7485e2615 ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.976569 containerd[1763]: 2026-04-13 20:01:07.954 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:07.976569 containerd[1763]: 2026-04-13 20:01:07.954 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f11b10eb-dcb3-4727-9ed1-4a266865a659", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be", Pod:"coredns-674b8bbfcf-7624w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7485e2615", MAC:"02:67:55:12:bb:a4", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:07.976569 containerd[1763]: 2026-04-13 20:01:07.972 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be" Namespace="kube-system" Pod="coredns-674b8bbfcf-7624w" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:08.005243 containerd[1763]: time="2026-04-13T20:01:08.005091403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:08.005243 containerd[1763]: time="2026-04-13T20:01:08.005176404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:08.005243 containerd[1763]: time="2026-04-13T20:01:08.005187764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:08.005571 containerd[1763]: time="2026-04-13T20:01:08.005511804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:08.022273 systemd[1]: Started cri-containerd-6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be.scope - libcontainer container 6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be. 
Apr 13 20:01:08.054146 containerd[1763]: time="2026-04-13T20:01:08.054051347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7624w,Uid:f11b10eb-dcb3-4727-9ed1-4a266865a659,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be\"" Apr 13 20:01:08.064959 containerd[1763]: time="2026-04-13T20:01:08.064928401Z" level=info msg="CreateContainer within sandbox \"6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:01:08.110613 systemd[1]: run-netns-cni\x2de87f1388\x2d297c\x2d484d\x2da178\x2dd9c5fa25991f.mount: Deactivated successfully. Apr 13 20:01:08.111306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b76e4465ac78c7b0990d9f53b2594db57f09608fa3f789a613353974de0b77bf-shm.mount: Deactivated successfully. Apr 13 20:01:08.111609 systemd[1]: run-netns-cni\x2d56ea491c\x2d4f75\x2db658\x2d3958\x2d759a9f6e751b.mount: Deactivated successfully. Apr 13 20:01:08.121157 containerd[1763]: time="2026-04-13T20:01:08.121078914Z" level=info msg="CreateContainer within sandbox \"6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8c51cde92a1e6b8de2694ae261f395a291a87cc2b1e92558aac96585cfe30c0\"" Apr 13 20:01:08.122861 containerd[1763]: time="2026-04-13T20:01:08.122113316Z" level=info msg="StartContainer for \"a8c51cde92a1e6b8de2694ae261f395a291a87cc2b1e92558aac96585cfe30c0\"" Apr 13 20:01:08.154362 systemd[1]: Started cri-containerd-a8c51cde92a1e6b8de2694ae261f395a291a87cc2b1e92558aac96585cfe30c0.scope - libcontainer container a8c51cde92a1e6b8de2694ae261f395a291a87cc2b1e92558aac96585cfe30c0. 
Apr 13 20:01:08.188779 containerd[1763]: time="2026-04-13T20:01:08.188666642Z" level=info msg="StartContainer for \"a8c51cde92a1e6b8de2694ae261f395a291a87cc2b1e92558aac96585cfe30c0\" returns successfully" Apr 13 20:01:08.249738 kubelet[3186]: I0413 20:01:08.249532 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bh6fw" podStartSLOduration=52.249515161 podStartE2EDuration="52.249515161s" podCreationTimestamp="2026-04-13 20:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:01:08.249441281 +0000 UTC m=+57.299795928" watchObservedRunningTime="2026-04-13 20:01:08.249515161 +0000 UTC m=+57.299869808" Apr 13 20:01:08.303146 kubelet[3186]: I0413 20:01:08.302372 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7624w" podStartSLOduration=52.302356031 podStartE2EDuration="52.302356031s" podCreationTimestamp="2026-04-13 20:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:01:08.268311706 +0000 UTC m=+57.318666393" watchObservedRunningTime="2026-04-13 20:01:08.302356031 +0000 UTC m=+57.352710638" Apr 13 20:01:08.418149 kubelet[3186]: I0413 20:01:08.418035 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-backend-key-pair\") pod \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " Apr 13 20:01:08.418149 kubelet[3186]: I0413 20:01:08.418100 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-ca-bundle\") pod 
\"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " Apr 13 20:01:08.418473 kubelet[3186]: I0413 20:01:08.418118 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-286jk\" (UniqueName: \"kubernetes.io/projected/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-kube-api-access-286jk\") pod \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " Apr 13 20:01:08.418503 kubelet[3186]: I0413 20:01:08.418491 3186 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-nginx-config\") pod \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\" (UID: \"7c7a94b2-32bd-4e41-8a24-ff2f686a46d4\") " Apr 13 20:01:08.418911 kubelet[3186]: I0413 20:01:08.418885 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4" (UID: "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:01:08.420548 kubelet[3186]: I0413 20:01:08.419411 3186 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-ca-bundle\") on node \"ci-4081.3.7-a-39cd336750\" DevicePath \"\"" Apr 13 20:01:08.424526 systemd[1]: var-lib-kubelet-pods-7c7a94b2\x2d32bd\x2d4e41\x2d8a24\x2dff2f686a46d4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 20:01:08.425273 kubelet[3186]: I0413 20:01:08.425200 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4" (UID: "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:01:08.426400 kubelet[3186]: I0413 20:01:08.426347 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-kube-api-access-286jk" (OuterVolumeSpecName: "kube-api-access-286jk") pod "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4" (UID: "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4"). InnerVolumeSpecName "kube-api-access-286jk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:01:08.426400 kubelet[3186]: I0413 20:01:08.426351 3186 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4" (UID: "7c7a94b2-32bd-4e41-8a24-ff2f686a46d4"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:01:08.520181 kubelet[3186]: I0413 20:01:08.520139 3186 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-whisker-backend-key-pair\") on node \"ci-4081.3.7-a-39cd336750\" DevicePath \"\"" Apr 13 20:01:08.520181 kubelet[3186]: I0413 20:01:08.520174 3186 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-286jk\" (UniqueName: \"kubernetes.io/projected/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-kube-api-access-286jk\") on node \"ci-4081.3.7-a-39cd336750\" DevicePath \"\"" Apr 13 20:01:08.520181 kubelet[3186]: I0413 20:01:08.520186 3186 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4-nginx-config\") on node \"ci-4081.3.7-a-39cd336750\" DevicePath \"\"" Apr 13 20:01:08.634297 systemd-networkd[1361]: cali3c654aaf199: Gained IPv6LL Apr 13 20:01:08.762284 systemd-networkd[1361]: cali6337027c7e4: Gained IPv6LL Apr 13 20:01:08.762562 systemd-networkd[1361]: cali43a38b7fd50: Gained IPv6LL Apr 13 20:01:08.826228 systemd-networkd[1361]: cali6998814bad2: Gained IPv6LL Apr 13 20:01:09.018288 systemd-networkd[1361]: cali75f89d2ca89: Gained IPv6LL Apr 13 20:01:09.077783 systemd[1]: Removed slice kubepods-besteffort-pod7c7a94b2_32bd_4e41_8a24_ff2f686a46d4.slice - libcontainer container kubepods-besteffort-pod7c7a94b2_32bd_4e41_8a24_ff2f686a46d4.slice. Apr 13 20:01:09.106067 systemd[1]: var-lib-kubelet-pods-7c7a94b2\x2d32bd\x2d4e41\x2d8a24\x2dff2f686a46d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d286jk.mount: Deactivated successfully. 
Apr 13 20:01:09.216190 kernel: calico-node[4893]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:01:09.369826 systemd[1]: Created slice kubepods-besteffort-pod0db45b82_9373_409c_ae1c_a6ee15c817fc.slice - libcontainer container kubepods-besteffort-pod0db45b82_9373_409c_ae1c_a6ee15c817fc.slice. Apr 13 20:01:09.427256 kubelet[3186]: I0413 20:01:09.427065 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0db45b82-9373-409c-ae1c-a6ee15c817fc-whisker-backend-key-pair\") pod \"whisker-55c4dbdfb7-4kblx\" (UID: \"0db45b82-9373-409c-ae1c-a6ee15c817fc\") " pod="calico-system/whisker-55c4dbdfb7-4kblx" Apr 13 20:01:09.427256 kubelet[3186]: I0413 20:01:09.427111 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lt2\" (UniqueName: \"kubernetes.io/projected/0db45b82-9373-409c-ae1c-a6ee15c817fc-kube-api-access-q5lt2\") pod \"whisker-55c4dbdfb7-4kblx\" (UID: \"0db45b82-9373-409c-ae1c-a6ee15c817fc\") " pod="calico-system/whisker-55c4dbdfb7-4kblx" Apr 13 20:01:09.427256 kubelet[3186]: I0413 20:01:09.427153 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0db45b82-9373-409c-ae1c-a6ee15c817fc-nginx-config\") pod \"whisker-55c4dbdfb7-4kblx\" (UID: \"0db45b82-9373-409c-ae1c-a6ee15c817fc\") " pod="calico-system/whisker-55c4dbdfb7-4kblx" Apr 13 20:01:09.427256 kubelet[3186]: I0413 20:01:09.427184 3186 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0db45b82-9373-409c-ae1c-a6ee15c817fc-whisker-ca-bundle\") pod \"whisker-55c4dbdfb7-4kblx\" (UID: \"0db45b82-9373-409c-ae1c-a6ee15c817fc\") " pod="calico-system/whisker-55c4dbdfb7-4kblx" Apr 13 20:01:09.594236 
systemd-networkd[1361]: cali82c5df50ac5: Gained IPv6LL Apr 13 20:01:09.658242 systemd-networkd[1361]: cali9b7485e2615: Gained IPv6LL Apr 13 20:01:09.678776 containerd[1763]: time="2026-04-13T20:01:09.678559462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c4dbdfb7-4kblx,Uid:0db45b82-9373-409c-ae1c-a6ee15c817fc,Namespace:calico-system,Attempt:0,}" Apr 13 20:01:09.802121 systemd-networkd[1361]: vxlan.calico: Link UP Apr 13 20:01:09.802140 systemd-networkd[1361]: vxlan.calico: Gained carrier Apr 13 20:01:09.877170 systemd-networkd[1361]: cali08a4c42dfbb: Link UP Apr 13 20:01:09.878606 systemd-networkd[1361]: cali08a4c42dfbb: Gained carrier Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.773 [INFO][5011] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0 whisker-55c4dbdfb7- calico-system 0db45b82-9373-409c-ae1c-a6ee15c817fc 1019 0 2026-04-13 20:01:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55c4dbdfb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.7-a-39cd336750 whisker-55c4dbdfb7-4kblx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali08a4c42dfbb [] [] }} ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.773 [INFO][5011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.904030 containerd[1763]: 
2026-04-13 20:01:09.794 [INFO][5020] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" HandleID="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Workload="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.818 [INFO][5020] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" HandleID="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Workload="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.7-a-39cd336750", "pod":"whisker-55c4dbdfb7-4kblx", "timestamp":"2026-04-13 20:01:09.794196856 +0000 UTC"}, Hostname:"ci-4081.3.7-a-39cd336750", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000446160)} Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.818 [INFO][5020] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.819 [INFO][5020] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.819 [INFO][5020] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.7-a-39cd336750' Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.821 [INFO][5020] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.824 [INFO][5020] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.828 [INFO][5020] ipam/ipam.go 526: Trying affinity for 192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.830 [INFO][5020] ipam/ipam.go 160: Attempting to load block cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.833 [INFO][5020] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.74.0/26 host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.833 [INFO][5020] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.74.0/26 handle="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.835 [INFO][5020] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039 Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.845 [INFO][5020] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.74.0/26 handle="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.857 [INFO][5020] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.74.8/26] block=192.168.74.0/26 handle="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.857 [INFO][5020] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.74.8/26] handle="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" host="ci-4081.3.7-a-39cd336750" Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.857 [INFO][5020] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:09.904030 containerd[1763]: 2026-04-13 20:01:09.857 [INFO][5020] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.74.8/26] IPv6=[] ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" HandleID="k8s-pod-network.45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Workload="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.904582 containerd[1763]: 2026-04-13 20:01:09.871 [INFO][5011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0", GenerateName:"whisker-55c4dbdfb7-", Namespace:"calico-system", SelfLink:"", UID:"0db45b82-9373-409c-ae1c-a6ee15c817fc", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c4dbdfb7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"", Pod:"whisker-55c4dbdfb7-4kblx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali08a4c42dfbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:09.904582 containerd[1763]: 2026-04-13 20:01:09.871 [INFO][5011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.8/32] ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.904582 containerd[1763]: 2026-04-13 20:01:09.871 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08a4c42dfbb ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.904582 containerd[1763]: 2026-04-13 20:01:09.878 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.904582 containerd[1763]: 2026-04-13 20:01:09.879 [INFO][5011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0", GenerateName:"whisker-55c4dbdfb7-", Namespace:"calico-system", SelfLink:"", UID:"0db45b82-9373-409c-ae1c-a6ee15c817fc", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c4dbdfb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039", Pod:"whisker-55c4dbdfb7-4kblx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali08a4c42dfbb", MAC:"ea:cd:21:c3:81:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:09.904582 containerd[1763]: 2026-04-13 20:01:09.900 [INFO][5011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039" Namespace="calico-system" 
Pod="whisker-55c4dbdfb7-4kblx" WorkloadEndpoint="ci--4081.3.7--a--39cd336750-k8s-whisker--55c4dbdfb7--4kblx-eth0" Apr 13 20:01:09.923701 containerd[1763]: time="2026-04-13T20:01:09.923012627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:01:09.923701 containerd[1763]: time="2026-04-13T20:01:09.923064587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:01:09.923701 containerd[1763]: time="2026-04-13T20:01:09.923079667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:09.923701 containerd[1763]: time="2026-04-13T20:01:09.923163307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:01:09.942284 systemd[1]: Started cri-containerd-45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039.scope - libcontainer container 45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039. 
Apr 13 20:01:09.977978 containerd[1763]: time="2026-04-13T20:01:09.977945300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c4dbdfb7-4kblx,Uid:0db45b82-9373-409c-ae1c-a6ee15c817fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039\"" Apr 13 20:01:11.066980 systemd-networkd[1361]: cali08a4c42dfbb: Gained IPv6LL Apr 13 20:01:11.071248 containerd[1763]: time="2026-04-13T20:01:11.070922074Z" level=info msg="StopPodSandbox for \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\"" Apr 13 20:01:11.076009 kubelet[3186]: I0413 20:01:11.075726 3186 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c7a94b2-32bd-4e41-8a24-ff2f686a46d4" path="/var/lib/kubelet/pods/7c7a94b2-32bd-4e41-8a24-ff2f686a46d4/volumes" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.149 [WARNING][5172] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f11b10eb-dcb3-4727-9ed1-4a266865a659", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be", Pod:"coredns-674b8bbfcf-7624w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7485e2615", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 
20:01:11.149 [INFO][5172] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.149 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" iface="eth0" netns="" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.149 [INFO][5172] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.149 [INFO][5172] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.179 [INFO][5180] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.179 [INFO][5180] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.179 [INFO][5180] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.194 [WARNING][5180] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.194 [INFO][5180] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.195 [INFO][5180] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:11.203983 containerd[1763]: 2026-04-13 20:01:11.198 [INFO][5172] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.203983 containerd[1763]: time="2026-04-13T20:01:11.203589571Z" level=info msg="TearDown network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\" successfully" Apr 13 20:01:11.203983 containerd[1763]: time="2026-04-13T20:01:11.203614411Z" level=info msg="StopPodSandbox for \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\" returns successfully" Apr 13 20:01:11.204888 containerd[1763]: time="2026-04-13T20:01:11.204156972Z" level=info msg="RemovePodSandbox for \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\"" Apr 13 20:01:11.204888 containerd[1763]: time="2026-04-13T20:01:11.204186292Z" level=info msg="Forcibly stopping sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\"" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.260 [WARNING][5194] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f11b10eb-dcb3-4727-9ed1-4a266865a659", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 0, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.7-a-39cd336750", ContainerID:"6ee68a966c4c610200ac739f7da05cdfbbb20d3e9144cc802e6aed0e5df234be", Pod:"coredns-674b8bbfcf-7624w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b7485e2615", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 
20:01:11.261 [INFO][5194] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.261 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" iface="eth0" netns="" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.261 [INFO][5194] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.261 [INFO][5194] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.302 [INFO][5205] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.302 [INFO][5205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.302 [INFO][5205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.317 [WARNING][5205] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.317 [INFO][5205] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" HandleID="k8s-pod-network.fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Workload="ci--4081.3.7--a--39cd336750-k8s-coredns--674b8bbfcf--7624w-eth0" Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.318 [INFO][5205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:01:11.325890 containerd[1763]: 2026-04-13 20:01:11.321 [INFO][5194] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78" Apr 13 20:01:11.325890 containerd[1763]: time="2026-04-13T20:01:11.324843612Z" level=info msg="TearDown network for sandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\" successfully" Apr 13 20:01:11.336858 containerd[1763]: time="2026-04-13T20:01:11.336821308Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:01:11.337072 containerd[1763]: time="2026-04-13T20:01:11.336936788Z" level=info msg="RemovePodSandbox \"fbc067efb6985de63c812b428f302318463876b52fcebb6d714d62738399bc78\" returns successfully" Apr 13 20:01:11.393182 containerd[1763]: time="2026-04-13T20:01:11.392833343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:11.397610 containerd[1763]: time="2026-04-13T20:01:11.397577229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 13 20:01:11.401900 containerd[1763]: time="2026-04-13T20:01:11.401872715Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:11.410734 containerd[1763]: time="2026-04-13T20:01:11.410689966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:11.411722 containerd[1763]: time="2026-04-13T20:01:11.411117767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.780795291s" Apr 13 20:01:11.411722 containerd[1763]: time="2026-04-13T20:01:11.411169287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 20:01:11.412091 containerd[1763]: time="2026-04-13T20:01:11.412065728Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:01:11.422964 containerd[1763]: time="2026-04-13T20:01:11.422928543Z" level=info msg="CreateContainer within sandbox \"0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:01:11.463002 containerd[1763]: time="2026-04-13T20:01:11.462958996Z" level=info msg="CreateContainer within sandbox \"0fdf09c68769244010c4cd372d3bb034672c36079b5bfa399d62ecf8d93e536e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"10b064976e5ec0f5c1cd9f2e27dd986aefc48d03aeab1a475bccef4657d58abc\"" Apr 13 20:01:11.463698 containerd[1763]: time="2026-04-13T20:01:11.463405557Z" level=info msg="StartContainer for \"10b064976e5ec0f5c1cd9f2e27dd986aefc48d03aeab1a475bccef4657d58abc\"" Apr 13 20:01:11.494319 systemd[1]: Started cri-containerd-10b064976e5ec0f5c1cd9f2e27dd986aefc48d03aeab1a475bccef4657d58abc.scope - libcontainer container 10b064976e5ec0f5c1cd9f2e27dd986aefc48d03aeab1a475bccef4657d58abc. 
Apr 13 20:01:11.514456 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Apr 13 20:01:11.530870 containerd[1763]: time="2026-04-13T20:01:11.530832206Z" level=info msg="StartContainer for \"10b064976e5ec0f5c1cd9f2e27dd986aefc48d03aeab1a475bccef4657d58abc\" returns successfully" Apr 13 20:01:13.250436 kubelet[3186]: I0413 20:01:13.250265 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:01:14.825814 containerd[1763]: time="2026-04-13T20:01:14.825763670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:14.829904 containerd[1763]: time="2026-04-13T20:01:14.829746676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 13 20:01:14.835155 containerd[1763]: time="2026-04-13T20:01:14.834485962Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:14.841293 containerd[1763]: time="2026-04-13T20:01:14.841247051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:14.842290 containerd[1763]: time="2026-04-13T20:01:14.841963172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.429781844s" Apr 13 20:01:14.842290 containerd[1763]: time="2026-04-13T20:01:14.841995772Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 13 20:01:14.843299 containerd[1763]: time="2026-04-13T20:01:14.843274814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:01:14.863353 containerd[1763]: time="2026-04-13T20:01:14.863315640Z" level=info msg="CreateContainer within sandbox \"11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:01:14.904479 containerd[1763]: time="2026-04-13T20:01:14.904376655Z" level=info msg="CreateContainer within sandbox \"11588a2f05fea1e2c4d94514c2bf62be4b09fcbdbe4429c980c504936621cb81\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349\"" Apr 13 20:01:14.905225 containerd[1763]: time="2026-04-13T20:01:14.905192256Z" level=info msg="StartContainer for \"ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349\"" Apr 13 20:01:14.941270 systemd[1]: Started cri-containerd-ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349.scope - libcontainer container ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349. 
Apr 13 20:01:14.978431 containerd[1763]: time="2026-04-13T20:01:14.978389873Z" level=info msg="StartContainer for \"ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349\" returns successfully" Apr 13 20:01:15.281389 kubelet[3186]: I0413 20:01:15.280079 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8477974547-fl6tk" podStartSLOduration=42.494607458 podStartE2EDuration="46.280061955s" podCreationTimestamp="2026-04-13 20:00:29 +0000 UTC" firstStartedPulling="2026-04-13 20:01:07.626446871 +0000 UTC m=+56.676801478" lastFinishedPulling="2026-04-13 20:01:11.411901328 +0000 UTC m=+60.462255975" observedRunningTime="2026-04-13 20:01:12.267172186 +0000 UTC m=+61.317526873" watchObservedRunningTime="2026-04-13 20:01:15.280061955 +0000 UTC m=+64.330416562" Apr 13 20:01:15.322020 kubelet[3186]: I0413 20:01:15.320355 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bfd86f5bd-q2wp4" podStartSLOduration=38.225917065 podStartE2EDuration="45.320335888s" podCreationTimestamp="2026-04-13 20:00:30 +0000 UTC" firstStartedPulling="2026-04-13 20:01:07.74872855 +0000 UTC m=+56.799083197" lastFinishedPulling="2026-04-13 20:01:14.843147333 +0000 UTC m=+63.893502020" observedRunningTime="2026-04-13 20:01:15.283164279 +0000 UTC m=+64.333518926" watchObservedRunningTime="2026-04-13 20:01:15.320335888 +0000 UTC m=+64.370690535" Apr 13 20:01:17.161499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483963063.mount: Deactivated successfully. 
Apr 13 20:01:17.714202 containerd[1763]: time="2026-04-13T20:01:17.714154391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:17.721146 containerd[1763]: time="2026-04-13T20:01:17.721073720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 13 20:01:17.724958 containerd[1763]: time="2026-04-13T20:01:17.724914645Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:17.730935 containerd[1763]: time="2026-04-13T20:01:17.730864773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:17.732196 containerd[1763]: time="2026-04-13T20:01:17.731625974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.88831852s" Apr 13 20:01:17.732196 containerd[1763]: time="2026-04-13T20:01:17.731658134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 13 20:01:17.733274 containerd[1763]: time="2026-04-13T20:01:17.733250816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:01:17.740708 containerd[1763]: time="2026-04-13T20:01:17.740409905Z" level=info msg="CreateContainer within sandbox 
\"57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:01:17.798079 containerd[1763]: time="2026-04-13T20:01:17.798032980Z" level=info msg="CreateContainer within sandbox \"57f642f13c09034f48c3da317e9e1ace13aebf338e33710d3c6d08f8a30017c3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9\"" Apr 13 20:01:17.799494 containerd[1763]: time="2026-04-13T20:01:17.799292662Z" level=info msg="StartContainer for \"cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9\"" Apr 13 20:01:17.863284 systemd[1]: Started cri-containerd-cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9.scope - libcontainer container cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9. Apr 13 20:01:17.900310 containerd[1763]: time="2026-04-13T20:01:17.900198953Z" level=info msg="StartContainer for \"cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9\" returns successfully" Apr 13 20:01:18.109496 containerd[1763]: time="2026-04-13T20:01:18.109449905Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:18.114300 containerd[1763]: time="2026-04-13T20:01:18.113147910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:01:18.115581 containerd[1763]: time="2026-04-13T20:01:18.115550433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 382.082577ms" Apr 13 20:01:18.115698 containerd[1763]: 
time="2026-04-13T20:01:18.115683314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 13 20:01:18.116695 containerd[1763]: time="2026-04-13T20:01:18.116669395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:01:18.123981 containerd[1763]: time="2026-04-13T20:01:18.123944884Z" level=info msg="CreateContainer within sandbox \"23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:01:18.166782 containerd[1763]: time="2026-04-13T20:01:18.166702940Z" level=info msg="CreateContainer within sandbox \"23f9a8a514d615f58938d6a26628d1a7f6fa0b6c87c5622c1ebe66a57175ec38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a6e9d1a7a0ffa04c146c9ef3682aaa04af1edee60412b0dcb82c8fd2c1289ea\"" Apr 13 20:01:18.168063 containerd[1763]: time="2026-04-13T20:01:18.167400061Z" level=info msg="StartContainer for \"1a6e9d1a7a0ffa04c146c9ef3682aaa04af1edee60412b0dcb82c8fd2c1289ea\"" Apr 13 20:01:18.198337 systemd[1]: Started cri-containerd-1a6e9d1a7a0ffa04c146c9ef3682aaa04af1edee60412b0dcb82c8fd2c1289ea.scope - libcontainer container 1a6e9d1a7a0ffa04c146c9ef3682aaa04af1edee60412b0dcb82c8fd2c1289ea. 
Apr 13 20:01:18.233443 containerd[1763]: time="2026-04-13T20:01:18.233403667Z" level=info msg="StartContainer for \"1a6e9d1a7a0ffa04c146c9ef3682aaa04af1edee60412b0dcb82c8fd2c1289ea\" returns successfully" Apr 13 20:01:18.291211 kubelet[3186]: I0413 20:01:18.291162 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-t6m5k" podStartSLOduration=39.388565263 podStartE2EDuration="49.291146502s" podCreationTimestamp="2026-04-13 20:00:29 +0000 UTC" firstStartedPulling="2026-04-13 20:01:07.830103856 +0000 UTC m=+56.880458503" lastFinishedPulling="2026-04-13 20:01:17.732685135 +0000 UTC m=+66.783039742" observedRunningTime="2026-04-13 20:01:18.281192129 +0000 UTC m=+67.331546776" watchObservedRunningTime="2026-04-13 20:01:18.291146502 +0000 UTC m=+67.341501149" Apr 13 20:01:19.270773 kubelet[3186]: I0413 20:01:19.269996 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:01:19.713807 containerd[1763]: time="2026-04-13T20:01:19.713754233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:19.721094 containerd[1763]: time="2026-04-13T20:01:19.720902842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 13 20:01:19.730002 containerd[1763]: time="2026-04-13T20:01:19.729884334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.613182779s" Apr 13 20:01:19.730002 containerd[1763]: time="2026-04-13T20:01:19.729923534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 13 20:01:19.733149 containerd[1763]: time="2026-04-13T20:01:19.731710616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:01:19.745071 containerd[1763]: time="2026-04-13T20:01:19.745027473Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:19.745833 containerd[1763]: time="2026-04-13T20:01:19.745806914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:19.746556 containerd[1763]: time="2026-04-13T20:01:19.746372395Z" level=info msg="CreateContainer within sandbox \"affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:01:19.795164 containerd[1763]: time="2026-04-13T20:01:19.793204576Z" level=info msg="CreateContainer within sandbox \"affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5b522fd3fe3ecf1db2802029b8dccaa3e3b94923540082f820e9c7b86c4a2e97\"" Apr 13 20:01:19.795164 containerd[1763]: time="2026-04-13T20:01:19.795100858Z" level=info msg="StartContainer for \"5b522fd3fe3ecf1db2802029b8dccaa3e3b94923540082f820e9c7b86c4a2e97\"" Apr 13 20:01:19.828276 systemd[1]: Started cri-containerd-5b522fd3fe3ecf1db2802029b8dccaa3e3b94923540082f820e9c7b86c4a2e97.scope - libcontainer container 5b522fd3fe3ecf1db2802029b8dccaa3e3b94923540082f820e9c7b86c4a2e97. 
Apr 13 20:01:19.862034 containerd[1763]: time="2026-04-13T20:01:19.861990506Z" level=info msg="StartContainer for \"5b522fd3fe3ecf1db2802029b8dccaa3e3b94923540082f820e9c7b86c4a2e97\" returns successfully" Apr 13 20:01:21.368388 containerd[1763]: time="2026-04-13T20:01:21.368347665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:21.371920 containerd[1763]: time="2026-04-13T20:01:21.371887350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 13 20:01:21.375907 containerd[1763]: time="2026-04-13T20:01:21.375878035Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:21.385180 containerd[1763]: time="2026-04-13T20:01:21.385120727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:21.389145 containerd[1763]: time="2026-04-13T20:01:21.387281250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.655062033s" Apr 13 20:01:21.389145 containerd[1763]: time="2026-04-13T20:01:21.387318810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 13 20:01:21.392058 containerd[1763]: time="2026-04-13T20:01:21.392028096Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:01:21.400524 containerd[1763]: time="2026-04-13T20:01:21.400483627Z" level=info msg="CreateContainer within sandbox \"45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:01:21.441999 containerd[1763]: time="2026-04-13T20:01:21.441948921Z" level=info msg="CreateContainer within sandbox \"45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e1286779096b0adb5fe089170f1ddbd65cbb1f5a3f26e9e0122f86411376d507\"" Apr 13 20:01:21.443088 containerd[1763]: time="2026-04-13T20:01:21.442710042Z" level=info msg="StartContainer for \"e1286779096b0adb5fe089170f1ddbd65cbb1f5a3f26e9e0122f86411376d507\"" Apr 13 20:01:21.470520 systemd[1]: run-containerd-runc-k8s.io-e1286779096b0adb5fe089170f1ddbd65cbb1f5a3f26e9e0122f86411376d507-runc.EWeNc4.mount: Deactivated successfully. Apr 13 20:01:21.481274 systemd[1]: Started cri-containerd-e1286779096b0adb5fe089170f1ddbd65cbb1f5a3f26e9e0122f86411376d507.scope - libcontainer container e1286779096b0adb5fe089170f1ddbd65cbb1f5a3f26e9e0122f86411376d507. 
Apr 13 20:01:21.517531 containerd[1763]: time="2026-04-13T20:01:21.517347419Z" level=info msg="StartContainer for \"e1286779096b0adb5fe089170f1ddbd65cbb1f5a3f26e9e0122f86411376d507\" returns successfully" Apr 13 20:01:22.333566 kubelet[3186]: I0413 20:01:22.333208 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:01:22.363446 kubelet[3186]: I0413 20:01:22.362427 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-8477974547-xp2gk" podStartSLOduration=43.094198124 podStartE2EDuration="53.362411559s" podCreationTimestamp="2026-04-13 20:00:29 +0000 UTC" firstStartedPulling="2026-04-13 20:01:07.848305 +0000 UTC m=+56.898659647" lastFinishedPulling="2026-04-13 20:01:18.116518435 +0000 UTC m=+67.166873082" observedRunningTime="2026-04-13 20:01:18.31252541 +0000 UTC m=+67.362880017" watchObservedRunningTime="2026-04-13 20:01:22.362411559 +0000 UTC m=+71.412766166" Apr 13 20:01:24.048364 containerd[1763]: time="2026-04-13T20:01:24.048120792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:24.051173 containerd[1763]: time="2026-04-13T20:01:24.051145116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 13 20:01:24.055609 containerd[1763]: time="2026-04-13T20:01:24.055153201Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:24.060187 containerd[1763]: time="2026-04-13T20:01:24.060137527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:24.061394 
containerd[1763]: time="2026-04-13T20:01:24.060736288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.668676872s" Apr 13 20:01:24.061394 containerd[1763]: time="2026-04-13T20:01:24.060767848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 13 20:01:24.062396 containerd[1763]: time="2026-04-13T20:01:24.062364930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:01:24.070112 containerd[1763]: time="2026-04-13T20:01:24.070074060Z" level=info msg="CreateContainer within sandbox \"affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:01:24.120764 containerd[1763]: time="2026-04-13T20:01:24.120719326Z" level=info msg="CreateContainer within sandbox \"affa38b224b3e8af463aa45f1eae41f159c14f5524291d97bfad78354bc4cce1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"38f581e8c266059884240970c62acde1417535fc5e86f3e19b95e106f5b15109\"" Apr 13 20:01:24.121503 containerd[1763]: time="2026-04-13T20:01:24.121366927Z" level=info msg="StartContainer for \"38f581e8c266059884240970c62acde1417535fc5e86f3e19b95e106f5b15109\"" Apr 13 20:01:24.153281 systemd[1]: Started cri-containerd-38f581e8c266059884240970c62acde1417535fc5e86f3e19b95e106f5b15109.scope - libcontainer container 38f581e8c266059884240970c62acde1417535fc5e86f3e19b95e106f5b15109. 
Apr 13 20:01:24.182350 containerd[1763]: time="2026-04-13T20:01:24.182069886Z" level=info msg="StartContainer for \"38f581e8c266059884240970c62acde1417535fc5e86f3e19b95e106f5b15109\" returns successfully" Apr 13 20:01:24.304222 kubelet[3186]: I0413 20:01:24.302674 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mq4g6" podStartSLOduration=38.118024271 podStartE2EDuration="54.302655363s" podCreationTimestamp="2026-04-13 20:00:30 +0000 UTC" firstStartedPulling="2026-04-13 20:01:07.876860237 +0000 UTC m=+56.927214884" lastFinishedPulling="2026-04-13 20:01:24.061491329 +0000 UTC m=+73.111845976" observedRunningTime="2026-04-13 20:01:24.299710359 +0000 UTC m=+73.350065006" watchObservedRunningTime="2026-04-13 20:01:24.302655363 +0000 UTC m=+73.353010010" Apr 13 20:01:25.173376 kubelet[3186]: I0413 20:01:25.173279 3186 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:01:25.173376 kubelet[3186]: I0413 20:01:25.173310 3186 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:01:26.037045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879959747.mount: Deactivated successfully. 
Apr 13 20:01:26.119161 containerd[1763]: time="2026-04-13T20:01:26.119063430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:26.127773 containerd[1763]: time="2026-04-13T20:01:26.127728601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 13 20:01:26.130054 containerd[1763]: time="2026-04-13T20:01:26.130003444Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:26.137104 containerd[1763]: time="2026-04-13T20:01:26.136349892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:01:26.137104 containerd[1763]: time="2026-04-13T20:01:26.136988333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.074484563s" Apr 13 20:01:26.137104 containerd[1763]: time="2026-04-13T20:01:26.137017373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 13 20:01:26.146570 containerd[1763]: time="2026-04-13T20:01:26.146530705Z" level=info msg="CreateContainer within sandbox \"45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:01:26.201100 
containerd[1763]: time="2026-04-13T20:01:26.201057695Z" level=info msg="CreateContainer within sandbox \"45f2061043a5b6bfb139c59b1fc3a05cf69040d3663b3f4ecdd2835a73461039\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4904b0a06576c0fb65ecf5518582a0ef0a599ecfa48e6da9c12dfd46914b20ce\"" Apr 13 20:01:26.202149 containerd[1763]: time="2026-04-13T20:01:26.202102736Z" level=info msg="StartContainer for \"4904b0a06576c0fb65ecf5518582a0ef0a599ecfa48e6da9c12dfd46914b20ce\"" Apr 13 20:01:26.231291 systemd[1]: Started cri-containerd-4904b0a06576c0fb65ecf5518582a0ef0a599ecfa48e6da9c12dfd46914b20ce.scope - libcontainer container 4904b0a06576c0fb65ecf5518582a0ef0a599ecfa48e6da9c12dfd46914b20ce. Apr 13 20:01:26.270716 containerd[1763]: time="2026-04-13T20:01:26.269823062Z" level=info msg="StartContainer for \"4904b0a06576c0fb65ecf5518582a0ef0a599ecfa48e6da9c12dfd46914b20ce\" returns successfully" Apr 13 20:01:26.307078 kubelet[3186]: I0413 20:01:26.306619 3186 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55c4dbdfb7-4kblx" podStartSLOduration=1.155669767 podStartE2EDuration="17.306603949s" podCreationTimestamp="2026-04-13 20:01:09 +0000 UTC" firstStartedPulling="2026-04-13 20:01:09.986893952 +0000 UTC m=+59.037248639" lastFinishedPulling="2026-04-13 20:01:26.137828174 +0000 UTC m=+75.188182821" observedRunningTime="2026-04-13 20:01:26.305577427 +0000 UTC m=+75.355932074" watchObservedRunningTime="2026-04-13 20:01:26.306603949 +0000 UTC m=+75.356958596" Apr 13 20:01:27.106419 kubelet[3186]: I0413 20:01:27.105996 3186 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:01:43.718477 systemd[1]: run-containerd-runc-k8s.io-ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349-runc.NcyI8S.mount: Deactivated successfully. 
Apr 13 20:01:53.984452 systemd[1]: Started sshd@7-10.0.0.17:22-66.132.172.181:50430.service - OpenSSH per-connection server daemon (66.132.172.181:50430). Apr 13 20:02:02.341072 systemd[1]: Started sshd@8-10.0.0.17:22-20.229.252.112:58390.service - OpenSSH per-connection server daemon (20.229.252.112:58390). Apr 13 20:02:03.260305 sshd[5835]: Accepted publickey for core from 20.229.252.112 port 58390 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:03.262243 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:03.266763 systemd-logind[1714]: New session 10 of user core. Apr 13 20:02:03.276277 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:02:04.000006 sshd[5835]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:04.004295 systemd[1]: sshd@8-10.0.0.17:22-20.229.252.112:58390.service: Deactivated successfully. Apr 13 20:02:04.006713 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:02:04.008628 systemd-logind[1714]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:02:04.009440 systemd-logind[1714]: Removed session 10. Apr 13 20:02:09.152459 systemd[1]: Started sshd@9-10.0.0.17:22-20.229.252.112:44120.service - OpenSSH per-connection server daemon (20.229.252.112:44120). Apr 13 20:02:09.309641 sshd[5821]: Connection closed by 66.132.172.181 port 50430 [preauth] Apr 13 20:02:09.311024 systemd[1]: sshd@7-10.0.0.17:22-66.132.172.181:50430.service: Deactivated successfully. Apr 13 20:02:10.035073 sshd[5871]: Accepted publickey for core from 20.229.252.112 port 44120 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:10.037640 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:10.042242 systemd-logind[1714]: New session 11 of user core. Apr 13 20:02:10.049274 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 13 20:02:10.715986 sshd[5871]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:10.720353 systemd-logind[1714]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:02:10.720836 systemd[1]: sshd@9-10.0.0.17:22-20.229.252.112:44120.service: Deactivated successfully. Apr 13 20:02:10.722824 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:02:10.723798 systemd-logind[1714]: Removed session 11. Apr 13 20:02:15.276987 systemd[1]: run-containerd-runc-k8s.io-ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349-runc.5iHUfY.mount: Deactivated successfully. Apr 13 20:02:15.878050 systemd[1]: Started sshd@10-10.0.0.17:22-20.229.252.112:58556.service - OpenSSH per-connection server daemon (20.229.252.112:58556). Apr 13 20:02:16.804749 sshd[5909]: Accepted publickey for core from 20.229.252.112 port 58556 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:16.806641 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:16.810905 systemd-logind[1714]: New session 12 of user core. Apr 13 20:02:16.817350 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:02:17.504716 sshd[5909]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:17.508858 systemd[1]: sshd@10-10.0.0.17:22-20.229.252.112:58556.service: Deactivated successfully. Apr 13 20:02:17.510740 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:02:17.511622 systemd-logind[1714]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:02:17.512863 systemd-logind[1714]: Removed session 12. Apr 13 20:02:19.288862 systemd[1]: run-containerd-runc-k8s.io-cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9-runc.TGUNeE.mount: Deactivated successfully. Apr 13 20:02:22.469729 systemd[1]: run-containerd-runc-k8s.io-cbed764f632012378eb93fb096614182c5663415d15b5950aa04154cb748ebf9-runc.uTyGwV.mount: Deactivated successfully. 
Apr 13 20:02:22.673388 systemd[1]: Started sshd@11-10.0.0.17:22-20.229.252.112:58558.service - OpenSSH per-connection server daemon (20.229.252.112:58558). Apr 13 20:02:23.585142 sshd[5980]: Accepted publickey for core from 20.229.252.112 port 58558 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:23.586481 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:23.590254 systemd-logind[1714]: New session 13 of user core. Apr 13 20:02:23.597421 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:02:24.284579 sshd[5980]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:24.288601 systemd[1]: sshd@11-10.0.0.17:22-20.229.252.112:58558.service: Deactivated successfully. Apr 13 20:02:24.292320 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:02:24.293087 systemd-logind[1714]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:02:24.293973 systemd-logind[1714]: Removed session 13. Apr 13 20:02:24.448493 systemd[1]: Started sshd@12-10.0.0.17:22-20.229.252.112:58564.service - OpenSSH per-connection server daemon (20.229.252.112:58564). Apr 13 20:02:25.327405 sshd[5994]: Accepted publickey for core from 20.229.252.112 port 58564 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:25.328754 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:25.332471 systemd-logind[1714]: New session 14 of user core. Apr 13 20:02:25.339267 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:02:26.043573 sshd[5994]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:26.047501 systemd[1]: sshd@12-10.0.0.17:22-20.229.252.112:58564.service: Deactivated successfully. Apr 13 20:02:26.049645 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:02:26.050714 systemd-logind[1714]: Session 14 logged out. Waiting for processes to exit. 
Apr 13 20:02:26.051829 systemd-logind[1714]: Removed session 14. Apr 13 20:02:26.207369 systemd[1]: Started sshd@13-10.0.0.17:22-20.229.252.112:60770.service - OpenSSH per-connection server daemon (20.229.252.112:60770). Apr 13 20:02:27.122929 sshd[6005]: Accepted publickey for core from 20.229.252.112 port 60770 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:27.126769 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:27.131107 systemd-logind[1714]: New session 15 of user core. Apr 13 20:02:27.137255 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:02:27.820730 sshd[6005]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:27.824583 systemd[1]: sshd@13-10.0.0.17:22-20.229.252.112:60770.service: Deactivated successfully. Apr 13 20:02:27.826503 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:02:27.827120 systemd-logind[1714]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:02:27.828232 systemd-logind[1714]: Removed session 15. Apr 13 20:02:32.980760 systemd[1]: Started sshd@14-10.0.0.17:22-20.229.252.112:60774.service - OpenSSH per-connection server daemon (20.229.252.112:60774). Apr 13 20:02:33.901977 sshd[6038]: Accepted publickey for core from 20.229.252.112 port 60774 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:33.903063 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:33.906805 systemd-logind[1714]: New session 16 of user core. Apr 13 20:02:33.912254 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:02:34.599277 sshd[6038]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:34.603863 systemd-logind[1714]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:02:34.605777 systemd[1]: sshd@14-10.0.0.17:22-20.229.252.112:60774.service: Deactivated successfully. 
Apr 13 20:02:34.607568 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:02:34.608719 systemd-logind[1714]: Removed session 16. Apr 13 20:02:34.753215 systemd[1]: Started sshd@15-10.0.0.17:22-20.229.252.112:60776.service - OpenSSH per-connection server daemon (20.229.252.112:60776). Apr 13 20:02:35.637237 sshd[6051]: Accepted publickey for core from 20.229.252.112 port 60776 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:35.638602 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:35.642761 systemd-logind[1714]: New session 17 of user core. Apr 13 20:02:35.651308 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:02:36.490687 sshd[6051]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:36.494365 systemd[1]: sshd@15-10.0.0.17:22-20.229.252.112:60776.service: Deactivated successfully. Apr 13 20:02:36.496173 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:02:36.496811 systemd-logind[1714]: Session 17 logged out. Waiting for processes to exit. Apr 13 20:02:36.497755 systemd-logind[1714]: Removed session 17. Apr 13 20:02:36.620119 systemd[1]: Started sshd@16-10.0.0.17:22-20.229.252.112:59668.service - OpenSSH per-connection server daemon (20.229.252.112:59668). Apr 13 20:02:37.545540 sshd[6062]: Accepted publickey for core from 20.229.252.112 port 59668 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:37.546538 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:37.552218 systemd-logind[1714]: New session 18 of user core. Apr 13 20:02:37.558700 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:02:38.723416 sshd[6062]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:38.731046 systemd[1]: sshd@16-10.0.0.17:22-20.229.252.112:59668.service: Deactivated successfully. 
Apr 13 20:02:38.733815 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:02:38.735319 systemd-logind[1714]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:02:38.736143 systemd-logind[1714]: Removed session 18. Apr 13 20:02:38.889372 systemd[1]: Started sshd@17-10.0.0.17:22-20.229.252.112:59670.service - OpenSSH per-connection server daemon (20.229.252.112:59670). Apr 13 20:02:39.803894 sshd[6114]: Accepted publickey for core from 20.229.252.112 port 59670 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:39.805280 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:39.808859 systemd-logind[1714]: New session 19 of user core. Apr 13 20:02:39.813254 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 20:02:40.612322 sshd[6114]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:40.615950 systemd[1]: sshd@17-10.0.0.17:22-20.229.252.112:59670.service: Deactivated successfully. Apr 13 20:02:40.617682 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 20:02:40.618409 systemd-logind[1714]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:02:40.619479 systemd-logind[1714]: Removed session 19. Apr 13 20:02:40.767822 systemd[1]: Started sshd@18-10.0.0.17:22-20.229.252.112:59684.service - OpenSSH per-connection server daemon (20.229.252.112:59684). Apr 13 20:02:41.680159 sshd[6137]: Accepted publickey for core from 20.229.252.112 port 59684 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:41.681103 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:41.688945 systemd-logind[1714]: New session 20 of user core. Apr 13 20:02:41.694513 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 13 20:02:42.400636 sshd[6137]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:42.406375 systemd-logind[1714]: Session 20 logged out. Waiting for processes to exit. Apr 13 20:02:42.407254 systemd[1]: sshd@18-10.0.0.17:22-20.229.252.112:59684.service: Deactivated successfully. Apr 13 20:02:42.409016 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 20:02:42.413683 systemd-logind[1714]: Removed session 20. Apr 13 20:02:43.721914 systemd[1]: run-containerd-runc-k8s.io-ffb5ae265694316c28045ce0ba8fe8bc32b164ae781bfbe4b075eae56f455349-runc.PtQMPT.mount: Deactivated successfully. Apr 13 20:02:47.559063 systemd[1]: Started sshd@19-10.0.0.17:22-20.229.252.112:58300.service - OpenSSH per-connection server daemon (20.229.252.112:58300). Apr 13 20:02:48.474019 sshd[6216]: Accepted publickey for core from 20.229.252.112 port 58300 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:48.497619 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:48.501733 systemd-logind[1714]: New session 21 of user core. Apr 13 20:02:48.509286 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 20:02:49.165283 sshd[6216]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:49.169192 systemd[1]: sshd@19-10.0.0.17:22-20.229.252.112:58300.service: Deactivated successfully. Apr 13 20:02:49.171622 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 20:02:49.172601 systemd-logind[1714]: Session 21 logged out. Waiting for processes to exit. Apr 13 20:02:49.173630 systemd-logind[1714]: Removed session 21. Apr 13 20:02:54.325897 systemd[1]: Started sshd@20-10.0.0.17:22-20.229.252.112:58308.service - OpenSSH per-connection server daemon (20.229.252.112:58308). 
Apr 13 20:02:55.234739 sshd[6248]: Accepted publickey for core from 20.229.252.112 port 58308 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:02:55.236014 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:02:55.239518 systemd-logind[1714]: New session 22 of user core. Apr 13 20:02:55.243273 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 20:02:55.930493 sshd[6248]: pam_unix(sshd:session): session closed for user core Apr 13 20:02:55.934483 systemd-logind[1714]: Session 22 logged out. Waiting for processes to exit. Apr 13 20:02:55.935151 systemd[1]: sshd@20-10.0.0.17:22-20.229.252.112:58308.service: Deactivated successfully. Apr 13 20:02:55.937883 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 20:02:55.938928 systemd-logind[1714]: Removed session 22. Apr 13 20:03:01.087502 systemd[1]: Started sshd@21-10.0.0.17:22-20.229.252.112:52966.service - OpenSSH per-connection server daemon (20.229.252.112:52966). Apr 13 20:03:02.004551 sshd[6261]: Accepted publickey for core from 20.229.252.112 port 52966 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:03:02.005434 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:03:02.009228 systemd-logind[1714]: New session 23 of user core. Apr 13 20:03:02.016292 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 20:03:02.695240 sshd[6261]: pam_unix(sshd:session): session closed for user core Apr 13 20:03:02.700774 systemd-logind[1714]: Session 23 logged out. Waiting for processes to exit. Apr 13 20:03:02.702810 systemd[1]: sshd@21-10.0.0.17:22-20.229.252.112:52966.service: Deactivated successfully. Apr 13 20:03:02.706640 systemd[1]: session-23.scope: Deactivated successfully. Apr 13 20:03:02.708866 systemd-logind[1714]: Removed session 23. 
Apr 13 20:03:07.860353 systemd[1]: Started sshd@22-10.0.0.17:22-20.229.252.112:40240.service - OpenSSH per-connection server daemon (20.229.252.112:40240). Apr 13 20:03:08.767208 sshd[6274]: Accepted publickey for core from 20.229.252.112 port 40240 ssh2: RSA SHA256:eM1a3yfLGBv9yc03rH6j13R5s+OuAreTiph5zIMSg/A Apr 13 20:03:08.768101 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:03:08.771820 systemd-logind[1714]: New session 24 of user core. Apr 13 20:03:08.777279 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 20:03:09.460966 sshd[6274]: pam_unix(sshd:session): session closed for user core Apr 13 20:03:09.463612 systemd-logind[1714]: Session 24 logged out. Waiting for processes to exit. Apr 13 20:03:09.463980 systemd[1]: sshd@22-10.0.0.17:22-20.229.252.112:40240.service: Deactivated successfully. Apr 13 20:03:09.465990 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 20:03:09.467697 systemd-logind[1714]: Removed session 24.