Apr 25 01:24:17.218280 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 25 01:24:17.218302 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 25 01:24:17.218311 kernel: KASLR enabled
Apr 25 01:24:17.218317 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 25 01:24:17.218325 kernel: printk: bootconsole [pl11] enabled
Apr 25 01:24:17.218331 kernel: efi: EFI v2.7 by EDK II
Apr 25 01:24:17.218338 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 25 01:24:17.218345 kernel: random: crng init done
Apr 25 01:24:17.218351 kernel: ACPI: Early table checksum verification disabled
Apr 25 01:24:17.218358 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 25 01:24:17.218365 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218371 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218379 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 25 01:24:17.218386 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218394 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218400 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218408 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218416 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218423 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218430 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 25 01:24:17.218436 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218443 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 25 01:24:17.218450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 25 01:24:17.218457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 25 01:24:17.218464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 25 01:24:17.218471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 25 01:24:17.218478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 25 01:24:17.218484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 25 01:24:17.218493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 25 01:24:17.218500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 25 01:24:17.218508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 25 01:24:17.218515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 25 01:24:17.218522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 25 01:24:17.218529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 25 01:24:17.218535 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Apr 25 01:24:17.218542 kernel: Zone ranges:
Apr 25 01:24:17.218549 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Apr 25 01:24:17.218556 kernel:   DMA32    empty
Apr 25 01:24:17.218562 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Apr 25 01:24:17.218569 kernel: Movable zone start for each node
Apr 25 01:24:17.218581 kernel: Early memory node ranges
Apr 25 01:24:17.218588 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 25 01:24:17.218595 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 25 01:24:17.218603 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 25 01:24:17.218610 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 25 01:24:17.218618 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 25 01:24:17.218625 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 25 01:24:17.218632 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 25 01:24:17.218640 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 25 01:24:17.218647 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 25 01:24:17.218654 kernel: psci: probing for conduit method from ACPI.
Apr 25 01:24:17.218661 kernel: psci: PSCIv1.1 detected in firmware.
Apr 25 01:24:17.218668 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 25 01:24:17.218676 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 25 01:24:17.218683 kernel: psci: SMC Calling Convention v1.4
Apr 25 01:24:17.218690 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 25 01:24:17.218698 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 25 01:24:17.218707 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 25 01:24:17.218714 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 25 01:24:17.218735 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 25 01:24:17.218742 kernel: Detected PIPT I-cache on CPU0
Apr 25 01:24:17.220785 kernel: CPU features: detected: GIC system register CPU interface
Apr 25 01:24:17.220801 kernel: CPU features: detected: Hardware dirty bit management
Apr 25 01:24:17.220809 kernel: CPU features: detected: Spectre-BHB
Apr 25 01:24:17.220816 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 25 01:24:17.220824 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 25 01:24:17.220831 kernel: CPU features: detected: ARM erratum 1418040
Apr 25 01:24:17.220838 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 25 01:24:17.220855 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 25 01:24:17.220863 kernel: alternatives: applying boot alternatives
Apr 25 01:24:17.220872 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 25 01:24:17.220880 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 25 01:24:17.220887 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 25 01:24:17.220895 kernel: Fallback order for Node 0: 0
Apr 25 01:24:17.220902 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1032156
Apr 25 01:24:17.220909 kernel: Policy zone: Normal
Apr 25 01:24:17.220917 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 25 01:24:17.220924 kernel: software IO TLB: area num 2.
Apr 25 01:24:17.220931 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 25 01:24:17.220940 kernel: Memory: 3982632K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211528K reserved, 0K cma-reserved)
Apr 25 01:24:17.220948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 25 01:24:17.220955 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 25 01:24:17.220963 kernel: rcu: RCU event tracing is enabled.
Apr 25 01:24:17.220971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 25 01:24:17.220978 kernel: Trampoline variant of Tasks RCU enabled.
Apr 25 01:24:17.220986 kernel: Tracing variant of Tasks RCU enabled.
Apr 25 01:24:17.220993 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 25 01:24:17.221000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 25 01:24:17.221008 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 25 01:24:17.221015 kernel: GICv3: 960 SPIs implemented
Apr 25 01:24:17.221023 kernel: GICv3: 0 Extended SPIs implemented
Apr 25 01:24:17.221031 kernel: Root IRQ handler: gic_handle_irq
Apr 25 01:24:17.221038 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 25 01:24:17.221045 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 25 01:24:17.221052 kernel: ITS: No ITS available, not enabling LPIs
Apr 25 01:24:17.221060 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 25 01:24:17.221067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 25 01:24:17.221075 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 25 01:24:17.221082 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 25 01:24:17.221090 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 25 01:24:17.221097 kernel: Console: colour dummy device 80x25
Apr 25 01:24:17.221106 kernel: printk: console [tty1] enabled
Apr 25 01:24:17.221114 kernel: ACPI: Core revision 20230628
Apr 25 01:24:17.221122 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 25 01:24:17.221129 kernel: pid_max: default: 32768 minimum: 301
Apr 25 01:24:17.221137 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 25 01:24:17.221144 kernel: landlock: Up and running.
Apr 25 01:24:17.221152 kernel: SELinux: Initializing.
Apr 25 01:24:17.221159 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221167 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221176 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 25 01:24:17.221184 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 25 01:24:17.221192 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 25 01:24:17.221199 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 25 01:24:17.221207 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 25 01:24:17.221214 kernel: rcu: Hierarchical SRCU implementation.
Apr 25 01:24:17.221222 kernel: rcu: Max phase no-delay instances is 400.
Apr 25 01:24:17.221230 kernel: Remapping and enabling EFI services.
Apr 25 01:24:17.221244 kernel: smp: Bringing up secondary CPUs ...
Apr 25 01:24:17.221252 kernel: Detected PIPT I-cache on CPU1
Apr 25 01:24:17.221260 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 25 01:24:17.221269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 25 01:24:17.221279 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 25 01:24:17.221287 kernel: smp: Brought up 1 node, 2 CPUs
Apr 25 01:24:17.221295 kernel: SMP: Total of 2 processors activated.
Apr 25 01:24:17.221303 kernel: CPU features: detected: 32-bit EL0 Support
Apr 25 01:24:17.221311 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 25 01:24:17.221321 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 25 01:24:17.221329 kernel: CPU features: detected: CRC32 instructions
Apr 25 01:24:17.221337 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 25 01:24:17.221345 kernel: CPU features: detected: LSE atomic instructions
Apr 25 01:24:17.221353 kernel: CPU features: detected: Privileged Access Never
Apr 25 01:24:17.221360 kernel: CPU: All CPU(s) started at EL1
Apr 25 01:24:17.221368 kernel: alternatives: applying system-wide alternatives
Apr 25 01:24:17.221376 kernel: devtmpfs: initialized
Apr 25 01:24:17.221384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 25 01:24:17.221394 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 25 01:24:17.221402 kernel: pinctrl core: initialized pinctrl subsystem
Apr 25 01:24:17.221410 kernel: SMBIOS 3.1.0 present.
Apr 25 01:24:17.221418 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026
Apr 25 01:24:17.221426 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 25 01:24:17.221434 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 25 01:24:17.221442 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 25 01:24:17.221450 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 25 01:24:17.221458 kernel: audit: initializing netlink subsys (disabled)
Apr 25 01:24:17.221467 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 25 01:24:17.221475 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 25 01:24:17.221483 kernel: cpuidle: using governor menu
Apr 25 01:24:17.221491 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 25 01:24:17.221499 kernel: ASID allocator initialised with 32768 entries
Apr 25 01:24:17.221506 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 25 01:24:17.221514 kernel: Serial: AMBA PL011 UART driver
Apr 25 01:24:17.221522 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 25 01:24:17.221530 kernel: Modules: 0 pages in range for non-PLT usage
Apr 25 01:24:17.221540 kernel: Modules: 509008 pages in range for PLT usage
Apr 25 01:24:17.221548 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 25 01:24:17.221564 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 25 01:24:17.221579 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 25 01:24:17.221595 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 25 01:24:17.221612 kernel: ACPI: Added _OSI(Module Device)
Apr 25 01:24:17.221620 kernel: ACPI: Added _OSI(Processor Device)
Apr 25 01:24:17.221628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 25 01:24:17.221636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 25 01:24:17.221645 kernel: ACPI: Interpreter enabled
Apr 25 01:24:17.221653 kernel: ACPI: Using GIC for interrupt routing
Apr 25 01:24:17.221661 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 25 01:24:17.221669 kernel: printk: console [ttyAMA0] enabled
Apr 25 01:24:17.221677 kernel: printk: bootconsole [pl11] disabled
Apr 25 01:24:17.221687 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 25 01:24:17.221696 kernel: iommu: Default domain type: Translated
Apr 25 01:24:17.221704 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 25 01:24:17.221712 kernel: efivars: Registered efivars operations
Apr 25 01:24:17.221721 kernel: vgaarb: loaded
Apr 25 01:24:17.221729 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 25 01:24:17.221738 kernel: VFS: Disk quotas dquot_6.6.0
Apr 25 01:24:17.221746 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 25 01:24:17.221760 kernel: pnp: PnP ACPI init
Apr 25 01:24:17.221771 kernel: pnp: PnP ACPI: found 0 devices
Apr 25 01:24:17.221779 kernel: NET: Registered PF_INET protocol family
Apr 25 01:24:17.221788 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 25 01:24:17.221796 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 25 01:24:17.221804 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 25 01:24:17.221812 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 25 01:24:17.221820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 25 01:24:17.221829 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 25 01:24:17.221836 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221846 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 25 01:24:17.221863 kernel: PCI: CLS 0 bytes, default 64
Apr 25 01:24:17.221871 kernel: kvm [1]: HYP mode not available
Apr 25 01:24:17.221879 kernel: Initialise system trusted keyrings
Apr 25 01:24:17.221887 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 25 01:24:17.221896 kernel: Key type asymmetric registered
Apr 25 01:24:17.221904 kernel: Asymmetric key parser 'x509' registered
Apr 25 01:24:17.221912 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 25 01:24:17.221922 kernel: io scheduler mq-deadline registered
Apr 25 01:24:17.221930 kernel: io scheduler kyber registered
Apr 25 01:24:17.221938 kernel: io scheduler bfq registered
Apr 25 01:24:17.221946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 25 01:24:17.221954 kernel: thunder_xcv, ver 1.0
Apr 25 01:24:17.221962 kernel: thunder_bgx, ver 1.0
Apr 25 01:24:17.221970 kernel: nicpf, ver 1.0
Apr 25 01:24:17.221978 kernel: nicvf, ver 1.0
Apr 25 01:24:17.222138 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 25 01:24:17.222221 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-25T01:24:16 UTC (1777080256)
Apr 25 01:24:17.222232 kernel: efifb: probing for efifb
Apr 25 01:24:17.222240 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 25 01:24:17.222248 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 25 01:24:17.222256 kernel: efifb: scrolling: redraw
Apr 25 01:24:17.222264 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 25 01:24:17.222272 kernel: Console: switching to colour frame buffer device 128x48
Apr 25 01:24:17.222279 kernel: fb0: EFI VGA frame buffer device
Apr 25 01:24:17.222289 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 25 01:24:17.222297 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 25 01:24:17.222306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Apr 25 01:24:17.222314 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 25 01:24:17.222322 kernel: watchdog: Hard watchdog permanently disabled
Apr 25 01:24:17.222331 kernel: NET: Registered PF_INET6 protocol family
Apr 25 01:24:17.222338 kernel: Segment Routing with IPv6
Apr 25 01:24:17.222346 kernel: In-situ OAM (IOAM) with IPv6
Apr 25 01:24:17.222355 kernel: NET: Registered PF_PACKET protocol family
Apr 25 01:24:17.222364 kernel: Key type dns_resolver registered
Apr 25 01:24:17.222372 kernel: registered taskstats version 1
Apr 25 01:24:17.222380 kernel: Loading compiled-in X.509 certificates
Apr 25 01:24:17.222389 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 25 01:24:17.222396 kernel: Key type .fscrypt registered
Apr 25 01:24:17.222404 kernel: Key type fscrypt-provisioning registered
Apr 25 01:24:17.222412 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 25 01:24:17.222421 kernel: ima: Allocated hash algorithm: sha1
Apr 25 01:24:17.222430 kernel: ima: No architecture policies found
Apr 25 01:24:17.222439 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 25 01:24:17.222448 kernel: clk: Disabling unused clocks
Apr 25 01:24:17.222456 kernel: Freeing unused kernel memory: 39424K
Apr 25 01:24:17.222464 kernel: Run /init as init process
Apr 25 01:24:17.222472 kernel:   with arguments:
Apr 25 01:24:17.222481 kernel:     /init
Apr 25 01:24:17.222489 kernel:   with environment:
Apr 25 01:24:17.222498 kernel:     HOME=/
Apr 25 01:24:17.222506 kernel:     TERM=linux
Apr 25 01:24:17.222517 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 25 01:24:17.222530 systemd[1]: Detected virtualization microsoft.
Apr 25 01:24:17.222539 systemd[1]: Detected architecture arm64.
Apr 25 01:24:17.222548 systemd[1]: Running in initrd.
Apr 25 01:24:17.222556 systemd[1]: No hostname configured, using default hostname.
Apr 25 01:24:17.222564 systemd[1]: Hostname set to .
Apr 25 01:24:17.222573 systemd[1]: Initializing machine ID from random generator.
Apr 25 01:24:17.222583 systemd[1]: Queued start job for default target initrd.target.
Apr 25 01:24:17.222592 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 01:24:17.222601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 01:24:17.222610 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 25 01:24:17.222618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 25 01:24:17.222627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 25 01:24:17.222636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 25 01:24:17.222646 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 25 01:24:17.222656 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 25 01:24:17.222665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 01:24:17.222674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 01:24:17.222682 systemd[1]: Reached target paths.target - Path Units.
Apr 25 01:24:17.222691 systemd[1]: Reached target slices.target - Slice Units.
Apr 25 01:24:17.222699 systemd[1]: Reached target swap.target - Swaps.
Apr 25 01:24:17.222708 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 01:24:17.222716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 01:24:17.222727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 01:24:17.222735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 25 01:24:17.222744 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 25 01:24:17.224807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 01:24:17.224819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 25 01:24:17.224828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 01:24:17.224837 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 01:24:17.224846 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 25 01:24:17.224861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 25 01:24:17.224870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 25 01:24:17.224879 systemd[1]: Starting systemd-fsck-usr.service...
Apr 25 01:24:17.224888 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 25 01:24:17.224896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 25 01:24:17.224936 systemd-journald[217]: Collecting audit messages is disabled.
Apr 25 01:24:17.224960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:17.224970 systemd-journald[217]: Journal started
Apr 25 01:24:17.224991 systemd-journald[217]: Runtime Journal (/run/log/journal/02db605233e347449935336be407be0d) is 8.0M, max 78.5M, 70.5M free.
Apr 25 01:24:17.232797 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 25 01:24:17.231776 systemd-modules-load[218]: Inserted module 'overlay'
Apr 25 01:24:17.249620 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 25 01:24:17.254737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 01:24:17.284026 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 25 01:24:17.284051 kernel: Bridge firewalling registered
Apr 25 01:24:17.274827 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 25 01:24:17.275832 systemd[1]: Finished systemd-fsck-usr.service.
Apr 25 01:24:17.292108 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 25 01:24:17.301806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:17.318019 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 01:24:17.330946 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 01:24:17.347077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 25 01:24:17.362909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 01:24:17.370905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:17.388780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 25 01:24:17.393591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 01:24:17.403692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 01:24:17.426925 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 25 01:24:17.438914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 01:24:17.451917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 25 01:24:17.473945 dracut-cmdline[253]: dracut-dracut-053
Apr 25 01:24:17.484002 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 25 01:24:17.516035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 01:24:17.535183 systemd-resolved[254]: Positive Trust Anchors:
Apr 25 01:24:17.535196 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 01:24:17.535228 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 01:24:17.541039 systemd-resolved[254]: Defaulting to hostname 'linux'.
Apr 25 01:24:17.541947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 01:24:17.548070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 01:24:17.618759 kernel: SCSI subsystem initialized
Apr 25 01:24:17.626757 kernel: Loading iSCSI transport class v2.0-870.
Apr 25 01:24:17.635766 kernel: iscsi: registered transport (tcp)
Apr 25 01:24:17.653117 kernel: iscsi: registered transport (qla4xxx)
Apr 25 01:24:17.653181 kernel: QLogic iSCSI HBA Driver
Apr 25 01:24:17.692552 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 25 01:24:17.706008 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 25 01:24:17.748583 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 25 01:24:17.748646 kernel: device-mapper: uevent: version 1.0.3
Apr 25 01:24:17.754011 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 25 01:24:17.803769 kernel: raid6: neonx8   gen() 14097 MB/s
Apr 25 01:24:17.822757 kernel: raid6: neonx4   gen() 15688 MB/s
Apr 25 01:24:17.841755 kernel: raid6: neonx2   gen() 13242 MB/s
Apr 25 01:24:17.861755 kernel: raid6: neonx1   gen() 10483 MB/s
Apr 25 01:24:17.880758 kernel: raid6: int64x8  gen()  6981 MB/s
Apr 25 01:24:17.899755 kernel: raid6: int64x4  gen()  7371 MB/s
Apr 25 01:24:17.919755 kernel: raid6: int64x2  gen()  6145 MB/s
Apr 25 01:24:17.941834 kernel: raid6: int64x1  gen()  5069 MB/s
Apr 25 01:24:17.941844 kernel: raid6: using algorithm neonx4 gen() 15688 MB/s
Apr 25 01:24:17.964817 kernel: raid6: .... xor() 12122 MB/s, rmw enabled
Apr 25 01:24:17.964828 kernel: raid6: using neon recovery algorithm
Apr 25 01:24:17.976139 kernel: xor: measuring software checksum speed
Apr 25 01:24:17.976168 kernel:    8regs           : 19793 MB/sec
Apr 25 01:24:17.979127 kernel:    32regs          : 19693 MB/sec
Apr 25 01:24:17.981910 kernel:    arm64_neon      : 27087 MB/sec
Apr 25 01:24:17.985288 kernel: xor: using function: arm64_neon (27087 MB/sec)
Apr 25 01:24:18.035768 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 25 01:24:18.046791 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 25 01:24:18.061883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 01:24:18.083074 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Apr 25 01:24:18.087627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 01:24:18.103867 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 25 01:24:18.133864 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Apr 25 01:24:18.166932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 25 01:24:18.180959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 25 01:24:18.222516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 01:24:18.239109 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 25 01:24:18.272815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 25 01:24:18.284390 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 25 01:24:18.297338 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 01:24:18.308451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 25 01:24:18.324986 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 25 01:24:18.342247 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 25 01:24:18.352215 kernel: hv_vmbus: Vmbus version:5.3
Apr 25 01:24:18.352237 kernel: hv_vmbus: registering driver hyperv_keyboard
Apr 25 01:24:18.358143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 01:24:18.358305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:18.402670 kernel: hv_vmbus: registering driver hid_hyperv
Apr 25 01:24:18.402698 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Apr 25 01:24:18.402710 kernel: hv_vmbus: registering driver hv_storvsc
Apr 25 01:24:18.368671 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 01:24:18.432393 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Apr 25 01:24:18.432418 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Apr 25 01:24:18.432567 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 25 01:24:18.379302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 01:24:18.379454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:18.469829 kernel: scsi host0: storvsc_host_t
Apr 25 01:24:18.470044 kernel: scsi host1: storvsc_host_t
Apr 25 01:24:18.470143 kernel: hv_vmbus: registering driver hv_netvsc
Apr 25 01:24:18.470154 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Apr 25 01:24:18.470180 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 25 01:24:18.414386 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:18.482405 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 5
Apr 25 01:24:18.471594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:18.494015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:18.527288 kernel: PTP clock support registered
Apr 25 01:24:18.530139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 01:24:18.550142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 01:24:18.579805 kernel: hv_utils: Registering HyperV Utility Driver
Apr 25 01:24:18.579830 kernel: hv_vmbus: registering driver hv_utils
Apr 25 01:24:18.579840 kernel: hv_utils: Heartbeat IC version 3.0
Apr 25 01:24:18.579850 kernel: hv_utils: Shutdown IC version 3.2
Apr 25 01:24:18.579860 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: VF slot 1 added
Apr 25 01:24:18.580018 kernel: hv_utils: TimeSync IC version 4.0
Apr 25 01:24:18.550257 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:18.565995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 01:24:18.217375 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Apr 25 01:24:18.217555 systemd-journald[217]: Time jumped backwards, rotating.
Apr 25 01:24:18.217595 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 25 01:24:18.566054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:18.236670 kernel: hv_vmbus: registering driver hv_pci
Apr 25 01:24:18.236701 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Apr 25 01:24:18.236874 kernel: hv_pci 9926467d-51d4-4e3f-b044-493dc393ede3: PCI VMBus probing: Using version 0x10004
Apr 25 01:24:18.181413 systemd-resolved[254]: Clock change detected. Flushing caches.
Apr 25 01:24:18.185818 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:18.220798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:18.285099 kernel: hv_pci 9926467d-51d4-4e3f-b044-493dc393ede3: PCI host bridge to bus 51d4:00
Apr 25 01:24:18.285899 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Apr 25 01:24:18.286040 kernel: pci_bus 51d4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Apr 25 01:24:18.286141 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Apr 25 01:24:18.286233 kernel: pci_bus 51d4:00: No busn resource found for root bus, will use [bus 00-ff]
Apr 25 01:24:18.286314 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 25 01:24:18.286411 kernel: pci 51d4:00:02.0: [15b3:1018] type 00 class 0x020000
Apr 25 01:24:18.290712 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Apr 25 01:24:18.296743 kernel: pci 51d4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 25 01:24:18.291256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:18.346720 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Apr 25 01:24:18.346913 kernel: pci 51d4:00:02.0: enabling Extended Tags
Apr 25 01:24:18.347026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 25 01:24:18.347036 kernel: pci 51d4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 51d4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Apr 25 01:24:18.347128 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 25 01:24:18.347219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 25 01:24:18.347313 kernel: pci_bus 51d4:00: busn_res: [bus 00-ff] end is updated to 00
Apr 25 01:24:18.331697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 01:24:18.363979 kernel: pci 51d4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Apr 25 01:24:18.385788 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:18.415564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 25 01:24:18.425616 kernel: mlx5_core 51d4:00:02.0: enabling device (0000 -> 0002)
Apr 25 01:24:18.431467 kernel: mlx5_core 51d4:00:02.0: firmware version: 16.30.5026
Apr 25 01:24:18.633978 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: VF registering: eth1
Apr 25 01:24:18.634194 kernel: mlx5_core 51d4:00:02.0 eth1: joined to eth0
Apr 25 01:24:18.640527 kernel: mlx5_core 51d4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Apr 25 01:24:18.649456 kernel: mlx5_core 51d4:00:02.0 enP20948s1: renamed from eth1
Apr 25 01:24:19.053456 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (489)
Apr 25 01:24:19.068639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 25 01:24:19.079361 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Apr 25 01:24:19.106451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Apr 25 01:24:19.278488 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (501)
Apr 25 01:24:19.291751 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Apr 25 01:24:19.297489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Apr 25 01:24:19.322560 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 25 01:24:19.346476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 25 01:24:19.354449 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 25 01:24:20.367070 disk-uuid[607]: The operation has completed successfully.
Apr 25 01:24:20.371273 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 25 01:24:20.435810 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 25 01:24:20.435921 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 25 01:24:20.463623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 25 01:24:20.474399 sh[720]: Success
Apr 25 01:24:20.505469 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 25 01:24:20.815399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 25 01:24:20.820241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 25 01:24:20.833568 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 25 01:24:20.862617 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e
Apr 25 01:24:20.862665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 25 01:24:20.868082 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 25 01:24:20.872088 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 25 01:24:20.875543 kernel: BTRFS info (device dm-0): using free space tree
Apr 25 01:24:21.278845 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 25 01:24:21.284123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 25 01:24:21.300630 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 25 01:24:21.310699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 25 01:24:21.341166 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 25 01:24:21.341218 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 25 01:24:21.344884 kernel: BTRFS info (device sda6): using free space tree
Apr 25 01:24:21.388484 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 25 01:24:21.402886 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 25 01:24:21.406982 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 25 01:24:21.413175 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 25 01:24:21.430619 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 25 01:24:21.440534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 01:24:21.452731 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 01:24:21.479358 systemd-networkd[904]: lo: Link UP
Apr 25 01:24:21.479370 systemd-networkd[904]: lo: Gained carrier
Apr 25 01:24:21.480991 systemd-networkd[904]: Enumeration completed
Apr 25 01:24:21.485081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 01:24:21.485777 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 01:24:21.485780 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 01:24:21.491172 systemd[1]: Reached target network.target - Network.
Apr 25 01:24:21.567449 kernel: mlx5_core 51d4:00:02.0 enP20948s1: Link up
Apr 25 01:24:21.608462 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: Data path switched to VF: enP20948s1
Apr 25 01:24:21.608646 systemd-networkd[904]: enP20948s1: Link UP
Apr 25 01:24:21.608752 systemd-networkd[904]: eth0: Link UP
Apr 25 01:24:21.608855 systemd-networkd[904]: eth0: Gained carrier
Apr 25 01:24:21.608864 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 01:24:21.628655 systemd-networkd[904]: enP20948s1: Gained carrier
Apr 25 01:24:21.638474 systemd-networkd[904]: eth0: DHCPv4 address 10.0.0.7/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 25 01:24:22.928420 ignition[902]: Ignition 2.19.0
Apr 25 01:24:22.928469 ignition[902]: Stage: fetch-offline
Apr 25 01:24:22.932148 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 01:24:22.928518 ignition[902]: no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:22.945626 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 25 01:24:22.928526 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:22.928626 ignition[902]: parsed url from cmdline: ""
Apr 25 01:24:22.928629 ignition[902]: no config URL provided
Apr 25 01:24:22.928634 ignition[902]: reading system config file "/usr/lib/ignition/user.ign"
Apr 25 01:24:22.928643 ignition[902]: no config at "/usr/lib/ignition/user.ign"
Apr 25 01:24:22.928648 ignition[902]: failed to fetch config: resource requires networking
Apr 25 01:24:22.928842 ignition[902]: Ignition finished successfully
Apr 25 01:24:22.971093 ignition[915]: Ignition 2.19.0
Apr 25 01:24:22.971100 ignition[915]: Stage: fetch
Apr 25 01:24:22.971320 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:22.971332 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:22.971458 ignition[915]: parsed url from cmdline: ""
Apr 25 01:24:22.971461 ignition[915]: no config URL provided
Apr 25 01:24:22.971467 ignition[915]: reading system config file "/usr/lib/ignition/user.ign"
Apr 25 01:24:22.971474 ignition[915]: no config at "/usr/lib/ignition/user.ign"
Apr 25 01:24:22.971498 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Apr 25 01:24:23.086210 ignition[915]: GET result: OK
Apr 25 01:24:23.086279 ignition[915]: config has been read from IMDS userdata
Apr 25 01:24:23.086324 ignition[915]: parsing config with SHA512: e847e49fb5486bad77303067a992924b0422a94b079e72458a4c0376b151c1357fd854677cdb8288819d7e842426a349ee2000391e04be5487617573b31901ea
Apr 25 01:24:23.090314 unknown[915]: fetched base config from "system"
Apr 25 01:24:23.090787 ignition[915]: fetch: fetch complete
Apr 25 01:24:23.090322 unknown[915]: fetched base config from "system"
Apr 25 01:24:23.090793 ignition[915]: fetch: fetch passed
Apr 25 01:24:23.090327 unknown[915]: fetched user config from "azure"
Apr 25 01:24:23.090849 ignition[915]: Ignition finished successfully
Apr 25 01:24:23.094814 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 25 01:24:23.111715 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 25 01:24:23.135245 ignition[921]: Ignition 2.19.0
Apr 25 01:24:23.135255 ignition[921]: Stage: kargs
Apr 25 01:24:23.139222 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 25 01:24:23.135454 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:23.135464 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:23.136375 ignition[921]: kargs: kargs passed
Apr 25 01:24:23.136421 ignition[921]: Ignition finished successfully
Apr 25 01:24:23.161745 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 25 01:24:23.180068 ignition[927]: Ignition 2.19.0
Apr 25 01:24:23.180080 ignition[927]: Stage: disks
Apr 25 01:24:23.180290 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:23.185589 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 25 01:24:23.180299 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:23.194036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 25 01:24:23.182868 ignition[927]: disks: disks passed
Apr 25 01:24:23.202857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 25 01:24:23.182920 ignition[927]: Ignition finished successfully
Apr 25 01:24:23.212552 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 25 01:24:23.221492 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 01:24:23.228526 systemd[1]: Reached target basic.target - Basic System.
Apr 25 01:24:23.249716 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 25 01:24:23.267169 systemd-networkd[904]: eth0: Gained IPv6LL
Apr 25 01:24:23.368162 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Apr 25 01:24:23.377032 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 25 01:24:23.391606 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 25 01:24:23.445501 kernel: EXT4-fs (sda9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none.
Apr 25 01:24:23.445731 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 25 01:24:23.449798 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 25 01:24:23.494510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 25 01:24:23.514455 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Apr 25 01:24:23.524795 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 25 01:24:23.524820 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 25 01:24:23.529086 kernel: BTRFS info (device sda6): using free space tree
Apr 25 01:24:23.533545 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 25 01:24:23.543638 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 25 01:24:23.553712 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 25 01:24:23.559175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 25 01:24:23.559211 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 01:24:23.565950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 25 01:24:23.578809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 25 01:24:23.597732 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 25 01:24:24.192990 coreos-metadata[963]: Apr 25 01:24:24.192 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 25 01:24:24.201175 coreos-metadata[963]: Apr 25 01:24:24.201 INFO Fetch successful
Apr 25 01:24:24.201175 coreos-metadata[963]: Apr 25 01:24:24.201 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Apr 25 01:24:24.214807 coreos-metadata[963]: Apr 25 01:24:24.211 INFO Fetch successful
Apr 25 01:24:24.260577 coreos-metadata[963]: Apr 25 01:24:24.260 INFO wrote hostname ci-4081.3.6-n-cf3dcbc0ec to /sysroot/etc/hostname
Apr 25 01:24:24.267712 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 25 01:24:24.643973 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Apr 25 01:24:24.689503 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Apr 25 01:24:24.697603 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Apr 25 01:24:24.726483 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 25 01:24:26.062575 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 25 01:24:26.075825 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 25 01:24:26.084605 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 25 01:24:26.100450 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 25 01:24:26.101902 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 25 01:24:26.128003 ignition[1064]: INFO : Ignition 2.19.0
Apr 25 01:24:26.130732 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 25 01:24:26.135826 ignition[1064]: INFO : Stage: mount
Apr 25 01:24:26.135826 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:26.135826 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:26.159204 ignition[1064]: INFO : mount: mount passed
Apr 25 01:24:26.159204 ignition[1064]: INFO : Ignition finished successfully
Apr 25 01:24:26.141643 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 25 01:24:26.165548 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 25 01:24:26.176777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 25 01:24:26.203455 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Apr 25 01:24:26.213787 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 25 01:24:26.213809 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 25 01:24:26.217160 kernel: BTRFS info (device sda6): using free space tree
Apr 25 01:24:26.225418 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 25 01:24:26.226092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 25 01:24:26.252573 ignition[1092]: INFO : Ignition 2.19.0
Apr 25 01:24:26.252573 ignition[1092]: INFO : Stage: files
Apr 25 01:24:26.259173 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:26.259173 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:26.259173 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Apr 25 01:24:26.278468 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 25 01:24:26.278468 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 25 01:24:26.342219 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 25 01:24:26.348184 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 25 01:24:26.348184 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 25 01:24:26.342632 unknown[1092]: wrote ssh authorized keys file for user: core
Apr 25 01:24:26.375560 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 25 01:24:26.384254 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 25 01:24:26.410390 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 25 01:24:26.587033 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Apr 25 01:24:27.042775 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 25 01:24:27.369923 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Apr 25 01:24:27.369923 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 25 01:24:27.417095 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 25 01:24:27.426242 ignition[1092]: INFO : files: files passed
Apr 25 01:24:27.426242 ignition[1092]: INFO : Ignition finished successfully
Apr 25 01:24:27.426837 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 25 01:24:27.463714 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 25 01:24:27.477615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 25 01:24:27.488819 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 25 01:24:27.488964 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 25 01:24:27.543850 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 01:24:27.543850 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 01:24:27.557700 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 25 01:24:27.559117 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 01:24:27.569995 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 25 01:24:27.588874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 25 01:24:27.617482 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 25 01:24:27.617605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 25 01:24:27.627531 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 25 01:24:27.636752 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 25 01:24:27.644952 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 25 01:24:27.656000 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 25 01:24:27.677595 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 01:24:27.691660 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 25 01:24:27.707719 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 25 01:24:27.713326 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 01:24:27.723876 systemd[1]: Stopped target timers.target - Timer Units.
Apr 25 01:24:27.733236 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 25 01:24:27.733413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 01:24:27.746841 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 25 01:24:27.755977 systemd[1]: Stopped target basic.target - Basic System.
Apr 25 01:24:27.763922 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 25 01:24:27.772007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 01:24:27.781486 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 25 01:24:27.790736 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 25 01:24:27.799575 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 25 01:24:27.808917 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 25 01:24:27.818468 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 25 01:24:27.827174 systemd[1]: Stopped target swap.target - Swaps.
Apr 25 01:24:27.834656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 25 01:24:27.834827 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 25 01:24:27.846310 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 25 01:24:27.855288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 01:24:27.864623 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 25 01:24:27.873214 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 01:24:27.878600 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 25 01:24:27.878771 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 25 01:24:27.892518 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 25 01:24:27.892676 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 01:24:27.902017 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 25 01:24:27.902182 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 25 01:24:27.911158 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 25 01:24:27.911307 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 25 01:24:27.941093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 25 01:24:27.950723 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 25 01:24:27.957789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 25 01:24:27.973678 ignition[1144]: INFO : Ignition 2.19.0
Apr 25 01:24:27.973678 ignition[1144]: INFO : Stage: umount
Apr 25 01:24:27.973678 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:27.973678 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:27.973678 ignition[1144]: INFO : umount: umount passed
Apr 25 01:24:27.973678 ignition[1144]: INFO : Ignition finished successfully
Apr 25 01:24:27.958010 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 01:24:27.972766 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 25 01:24:27.972878 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 25 01:24:27.981251 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 25 01:24:27.981348 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 25 01:24:27.988400 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 25 01:24:27.988653 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 25 01:24:28.000876 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 25 01:24:28.000962 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 25 01:24:28.012678 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 25 01:24:28.012743 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 25 01:24:28.021299 systemd[1]: Stopped target network.target - Network.
Apr 25 01:24:28.029492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 25 01:24:28.029568 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 01:24:28.039128 systemd[1]: Stopped target paths.target - Path Units.
Apr 25 01:24:28.047467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 25 01:24:28.056527 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 01:24:28.066632 systemd[1]: Stopped target slices.target - Slice Units.
Apr 25 01:24:28.074529 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 25 01:24:28.082423 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 25 01:24:28.082484 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 01:24:28.094930 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 25 01:24:28.094988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 01:24:28.103705 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 25 01:24:28.103756 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 25 01:24:28.111926 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 25 01:24:28.111962 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 25 01:24:28.120071 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 25 01:24:28.129426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 25 01:24:28.139502 systemd-networkd[904]: eth0: DHCPv6 lease lost
Apr 25 01:24:28.141303 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 25 01:24:28.142034 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 25 01:24:28.142158 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 25 01:24:28.160353 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 25 01:24:28.160458 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 25 01:24:28.165459 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 25 01:24:28.165542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 25 01:24:28.176873 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 25 01:24:28.176936 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 01:24:28.198871 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 25 01:24:28.207167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 25 01:24:28.207265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 01:24:28.217163 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 25 01:24:28.356636 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: Data path switched from VF: enP20948s1
Apr 25 01:24:28.217226 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 25 01:24:28.225951 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 25 01:24:28.225994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 25 01:24:28.234936 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 25 01:24:28.234975 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 01:24:28.245288 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 01:24:28.281386 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 25 01:24:28.281640 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 01:24:28.292111 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 25 01:24:28.292155 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 25 01:24:28.300341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 25 01:24:28.300373 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 01:24:28.309302 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 25 01:24:28.309351 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 25 01:24:28.322332 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 25 01:24:28.322374 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 25 01:24:28.341618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 01:24:28.341678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:28.364572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 25 01:24:28.374497 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 25 01:24:28.374569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 01:24:28.385207 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 25 01:24:28.385268 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 01:24:28.395624 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 25 01:24:28.395679 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 01:24:28.405034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 01:24:28.405072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:28.415756 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 25 01:24:28.417906 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 25 01:24:28.425198 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 25 01:24:28.425282 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 25 01:24:28.433171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 25 01:24:28.433254 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 25 01:24:28.445059 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 25 01:24:28.452665 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 25 01:24:28.452760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 25 01:24:28.478705 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 25 01:24:28.512548 systemd[1]: Switching root.
Apr 25 01:24:28.647383 systemd-journald[217]: Journal stopped
Apr 25 01:24:17.218280 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 25 01:24:17.218302 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 25 01:24:17.218311 kernel: KASLR enabled
Apr 25 01:24:17.218317 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Apr 25 01:24:17.218325 kernel: printk: bootconsole [pl11] enabled
Apr 25 01:24:17.218331 kernel: efi: EFI v2.7 by EDK II
Apr 25 01:24:17.218338 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f213018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Apr 25 01:24:17.218345 kernel: random: crng init done
Apr 25 01:24:17.218351 kernel: ACPI: Early table checksum verification disabled
Apr 25 01:24:17.218358 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Apr 25 01:24:17.218365 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218371 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218379 kernel: ACPI: DSDT 0x000000003FD41018 01DF7E (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Apr 25 01:24:17.218386 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218394 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218400 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218408 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218416 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218423 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218430 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Apr 25 01:24:17.218436 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Apr 25 01:24:17.218443 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Apr 25 01:24:17.218450 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Apr 25 01:24:17.218457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Apr 25 01:24:17.218464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Apr 25 01:24:17.218471 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Apr 25 01:24:17.218478 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Apr 25 01:24:17.218484 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Apr 25 01:24:17.218493 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Apr 25 01:24:17.218500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Apr 25 01:24:17.218508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Apr 25 01:24:17.218515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Apr 25 01:24:17.218522 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Apr 25 01:24:17.218529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Apr 25 01:24:17.218535 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Apr 25 01:24:17.218542 kernel: Zone ranges:
Apr 25 01:24:17.218549 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Apr 25 01:24:17.218556 kernel: DMA32 empty
Apr 25 01:24:17.218562 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Apr 25 01:24:17.218569 kernel: Movable zone start for each node
Apr 25 01:24:17.218581 kernel: Early memory node ranges
Apr 25 01:24:17.218588 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Apr 25 01:24:17.218595 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Apr 25 01:24:17.218603 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Apr 25 01:24:17.218610 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Apr 25 01:24:17.218618 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Apr 25 01:24:17.218625 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Apr 25 01:24:17.218632 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Apr 25 01:24:17.218640 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Apr 25 01:24:17.218647 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Apr 25 01:24:17.218654 kernel: psci: probing for conduit method from ACPI.
Apr 25 01:24:17.218661 kernel: psci: PSCIv1.1 detected in firmware.
Apr 25 01:24:17.218668 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 25 01:24:17.218676 kernel: psci: MIGRATE_INFO_TYPE not supported.
Apr 25 01:24:17.218683 kernel: psci: SMC Calling Convention v1.4
Apr 25 01:24:17.218690 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Apr 25 01:24:17.218698 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Apr 25 01:24:17.218707 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 25 01:24:17.218714 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 25 01:24:17.218735 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 25 01:24:17.218742 kernel: Detected PIPT I-cache on CPU0
Apr 25 01:24:17.220785 kernel: CPU features: detected: GIC system register CPU interface
Apr 25 01:24:17.220801 kernel: CPU features: detected: Hardware dirty bit management
Apr 25 01:24:17.220809 kernel: CPU features: detected: Spectre-BHB
Apr 25 01:24:17.220816 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 25 01:24:17.220824 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 25 01:24:17.220831 kernel: CPU features: detected: ARM erratum 1418040
Apr 25 01:24:17.220838 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Apr 25 01:24:17.220855 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 25 01:24:17.220863 kernel: alternatives: applying boot alternatives
Apr 25 01:24:17.220872 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 25 01:24:17.220880 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 25 01:24:17.220887 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 25 01:24:17.220895 kernel: Fallback order for Node 0: 0
Apr 25 01:24:17.220902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Apr 25 01:24:17.220909 kernel: Policy zone: Normal
Apr 25 01:24:17.220917 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 25 01:24:17.220924 kernel: software IO TLB: area num 2.
Apr 25 01:24:17.220931 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Apr 25 01:24:17.220940 kernel: Memory: 3982632K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211528K reserved, 0K cma-reserved)
Apr 25 01:24:17.220948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 25 01:24:17.220955 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 25 01:24:17.220963 kernel: rcu: RCU event tracing is enabled.
Apr 25 01:24:17.220971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 25 01:24:17.220978 kernel: Trampoline variant of Tasks RCU enabled.
Apr 25 01:24:17.220986 kernel: Tracing variant of Tasks RCU enabled.
Apr 25 01:24:17.220993 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 25 01:24:17.221000 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 25 01:24:17.221008 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 25 01:24:17.221015 kernel: GICv3: 960 SPIs implemented
Apr 25 01:24:17.221023 kernel: GICv3: 0 Extended SPIs implemented
Apr 25 01:24:17.221031 kernel: Root IRQ handler: gic_handle_irq
Apr 25 01:24:17.221038 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Apr 25 01:24:17.221045 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Apr 25 01:24:17.221052 kernel: ITS: No ITS available, not enabling LPIs
Apr 25 01:24:17.221060 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 25 01:24:17.221067 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 25 01:24:17.221075 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 25 01:24:17.221082 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 25 01:24:17.221090 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 25 01:24:17.221097 kernel: Console: colour dummy device 80x25
Apr 25 01:24:17.221106 kernel: printk: console [tty1] enabled
Apr 25 01:24:17.221114 kernel: ACPI: Core revision 20230628
Apr 25 01:24:17.221122 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 25 01:24:17.221129 kernel: pid_max: default: 32768 minimum: 301
Apr 25 01:24:17.221137 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 25 01:24:17.221144 kernel: landlock: Up and running.
Apr 25 01:24:17.221152 kernel: SELinux: Initializing.
Apr 25 01:24:17.221159 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221167 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221176 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 25 01:24:17.221184 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 25 01:24:17.221192 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Apr 25 01:24:17.221199 kernel: Hyper-V: Host Build 10.0.26100.1542-1-0
Apr 25 01:24:17.221207 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Apr 25 01:24:17.221214 kernel: rcu: Hierarchical SRCU implementation.
Apr 25 01:24:17.221222 kernel: rcu: Max phase no-delay instances is 400.
Apr 25 01:24:17.221230 kernel: Remapping and enabling EFI services.
Apr 25 01:24:17.221244 kernel: smp: Bringing up secondary CPUs ...
Apr 25 01:24:17.221252 kernel: Detected PIPT I-cache on CPU1
Apr 25 01:24:17.221260 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Apr 25 01:24:17.221269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 25 01:24:17.221279 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 25 01:24:17.221287 kernel: smp: Brought up 1 node, 2 CPUs
Apr 25 01:24:17.221295 kernel: SMP: Total of 2 processors activated.
Apr 25 01:24:17.221303 kernel: CPU features: detected: 32-bit EL0 Support
Apr 25 01:24:17.221311 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Apr 25 01:24:17.221321 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 25 01:24:17.221329 kernel: CPU features: detected: CRC32 instructions
Apr 25 01:24:17.221337 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 25 01:24:17.221345 kernel: CPU features: detected: LSE atomic instructions
Apr 25 01:24:17.221353 kernel: CPU features: detected: Privileged Access Never
Apr 25 01:24:17.221360 kernel: CPU: All CPU(s) started at EL1
Apr 25 01:24:17.221368 kernel: alternatives: applying system-wide alternatives
Apr 25 01:24:17.221376 kernel: devtmpfs: initialized
Apr 25 01:24:17.221384 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 25 01:24:17.221394 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 25 01:24:17.221402 kernel: pinctrl core: initialized pinctrl subsystem
Apr 25 01:24:17.221410 kernel: SMBIOS 3.1.0 present.
Apr 25 01:24:17.221418 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/09/2026
Apr 25 01:24:17.221426 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 25 01:24:17.221434 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 25 01:24:17.221442 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 25 01:24:17.221450 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 25 01:24:17.221458 kernel: audit: initializing netlink subsys (disabled)
Apr 25 01:24:17.221467 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Apr 25 01:24:17.221475 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 25 01:24:17.221483 kernel: cpuidle: using governor menu
Apr 25 01:24:17.221491 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 25 01:24:17.221499 kernel: ASID allocator initialised with 32768 entries
Apr 25 01:24:17.221506 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 25 01:24:17.221514 kernel: Serial: AMBA PL011 UART driver
Apr 25 01:24:17.221522 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 25 01:24:17.221530 kernel: Modules: 0 pages in range for non-PLT usage
Apr 25 01:24:17.221540 kernel: Modules: 509008 pages in range for PLT usage
Apr 25 01:24:17.221548 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 25 01:24:17.221564 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 25 01:24:17.221579 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221587 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 25 01:24:17.221595 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 25 01:24:17.221603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 25 01:24:17.221612 kernel: ACPI: Added _OSI(Module Device)
Apr 25 01:24:17.221620 kernel: ACPI: Added _OSI(Processor Device)
Apr 25 01:24:17.221628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 25 01:24:17.221636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 25 01:24:17.221645 kernel: ACPI: Interpreter enabled
Apr 25 01:24:17.221653 kernel: ACPI: Using GIC for interrupt routing
Apr 25 01:24:17.221661 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Apr 25 01:24:17.221669 kernel: printk: console [ttyAMA0] enabled
Apr 25 01:24:17.221677 kernel: printk: bootconsole [pl11] disabled
Apr 25 01:24:17.221687 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Apr 25 01:24:17.221696 kernel: iommu: Default domain type: Translated
Apr 25 01:24:17.221704 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 25 01:24:17.221712 kernel: efivars: Registered efivars operations
Apr 25 01:24:17.221721 kernel: vgaarb: loaded
Apr 25 01:24:17.221729 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 25 01:24:17.221738 kernel: VFS: Disk quotas dquot_6.6.0
Apr 25 01:24:17.221746 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 25 01:24:17.221760 kernel: pnp: PnP ACPI init
Apr 25 01:24:17.221771 kernel: pnp: PnP ACPI: found 0 devices
Apr 25 01:24:17.221779 kernel: NET: Registered PF_INET protocol family
Apr 25 01:24:17.221788 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 25 01:24:17.221796 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 25 01:24:17.221804 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 25 01:24:17.221812 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 25 01:24:17.221820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 25 01:24:17.221829 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 25 01:24:17.221836 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221846 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 25 01:24:17.221855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 25 01:24:17.221863 kernel: PCI: CLS 0 bytes, default 64
Apr 25 01:24:17.221871 kernel: kvm [1]: HYP mode not available
Apr 25 01:24:17.221879 kernel: Initialise system trusted keyrings
Apr 25 01:24:17.221887 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 25 01:24:17.221896 kernel: Key type asymmetric registered
Apr 25 01:24:17.221904 kernel: Asymmetric key parser 'x509' registered
Apr 25 01:24:17.221912 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 25 01:24:17.221922 kernel: io scheduler mq-deadline registered
Apr 25 01:24:17.221930 kernel: io scheduler kyber registered
Apr 25 01:24:17.221938 kernel: io scheduler bfq registered
Apr 25 01:24:17.221946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 25 01:24:17.221954 kernel: thunder_xcv, ver 1.0
Apr 25 01:24:17.221962 kernel: thunder_bgx, ver 1.0
Apr 25 01:24:17.221970 kernel: nicpf, ver 1.0
Apr 25 01:24:17.221978 kernel: nicvf, ver 1.0
Apr 25 01:24:17.222138 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 25 01:24:17.222221 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-25T01:24:16 UTC (1777080256)
Apr 25 01:24:17.222232 kernel: efifb: probing for efifb
Apr 25 01:24:17.222240 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Apr 25 01:24:17.222248 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Apr 25 01:24:17.222256 kernel: efifb: scrolling: redraw
Apr 25 01:24:17.222264 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 25 01:24:17.222272 kernel: Console: switching to colour frame buffer device 128x48
Apr 25 01:24:17.222279 kernel: fb0: EFI VGA frame buffer device
Apr 25 01:24:17.222289 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Apr 25 01:24:17.222297 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 25 01:24:17.222306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Apr 25 01:24:17.222314 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 25 01:24:17.222322 kernel: watchdog: Hard watchdog permanently disabled
Apr 25 01:24:17.222331 kernel: NET: Registered PF_INET6 protocol family
Apr 25 01:24:17.222338 kernel: Segment Routing with IPv6
Apr 25 01:24:17.222346 kernel: In-situ OAM (IOAM) with IPv6
Apr 25 01:24:17.222355 kernel: NET: Registered PF_PACKET protocol family
Apr 25 01:24:17.222364 kernel: Key type dns_resolver registered
Apr 25 01:24:17.222372 kernel: registered taskstats version 1
Apr 25 01:24:17.222380 kernel: Loading compiled-in X.509 certificates
Apr 25 01:24:17.222389 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 25 01:24:17.222396 kernel: Key type .fscrypt registered
Apr 25 01:24:17.222404 kernel: Key type fscrypt-provisioning registered
Apr 25 01:24:17.222412 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 25 01:24:17.222421 kernel: ima: Allocated hash algorithm: sha1
Apr 25 01:24:17.222430 kernel: ima: No architecture policies found
Apr 25 01:24:17.222439 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 25 01:24:17.222448 kernel: clk: Disabling unused clocks
Apr 25 01:24:17.222456 kernel: Freeing unused kernel memory: 39424K
Apr 25 01:24:17.222464 kernel: Run /init as init process
Apr 25 01:24:17.222472 kernel: with arguments:
Apr 25 01:24:17.222481 kernel: /init
Apr 25 01:24:17.222489 kernel: with environment:
Apr 25 01:24:17.222498 kernel: HOME=/
Apr 25 01:24:17.222506 kernel: TERM=linux
Apr 25 01:24:17.222517 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 25 01:24:17.222530 systemd[1]: Detected virtualization microsoft.
Apr 25 01:24:17.222539 systemd[1]: Detected architecture arm64.
Apr 25 01:24:17.222548 systemd[1]: Running in initrd.
Apr 25 01:24:17.222556 systemd[1]: No hostname configured, using default hostname.
Apr 25 01:24:17.222564 systemd[1]: Hostname set to .
Apr 25 01:24:17.222573 systemd[1]: Initializing machine ID from random generator.
Apr 25 01:24:17.222583 systemd[1]: Queued start job for default target initrd.target.
Apr 25 01:24:17.222592 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 01:24:17.222601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 01:24:17.222610 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 25 01:24:17.222618 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 25 01:24:17.222627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 25 01:24:17.222636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 25 01:24:17.222646 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 25 01:24:17.222656 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 25 01:24:17.222665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 01:24:17.222674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 01:24:17.222682 systemd[1]: Reached target paths.target - Path Units.
Apr 25 01:24:17.222691 systemd[1]: Reached target slices.target - Slice Units.
Apr 25 01:24:17.222699 systemd[1]: Reached target swap.target - Swaps.
Apr 25 01:24:17.222708 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 01:24:17.222716 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 01:24:17.222727 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 01:24:17.222735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 25 01:24:17.222744 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 25 01:24:17.224807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 01:24:17.224819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 25 01:24:17.224828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 01:24:17.224837 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 01:24:17.224846 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 25 01:24:17.224861 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 25 01:24:17.224870 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 25 01:24:17.224879 systemd[1]: Starting systemd-fsck-usr.service...
Apr 25 01:24:17.224888 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 25 01:24:17.224896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 25 01:24:17.224936 systemd-journald[217]: Collecting audit messages is disabled.
Apr 25 01:24:17.224960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:17.224970 systemd-journald[217]: Journal started
Apr 25 01:24:17.224991 systemd-journald[217]: Runtime Journal (/run/log/journal/02db605233e347449935336be407be0d) is 8.0M, max 78.5M, 70.5M free.
Apr 25 01:24:17.232797 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 25 01:24:17.231776 systemd-modules-load[218]: Inserted module 'overlay'
Apr 25 01:24:17.249620 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 25 01:24:17.254737 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 01:24:17.284026 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 25 01:24:17.284051 kernel: Bridge firewalling registered
Apr 25 01:24:17.274827 systemd-modules-load[218]: Inserted module 'br_netfilter'
Apr 25 01:24:17.275832 systemd[1]: Finished systemd-fsck-usr.service.
Apr 25 01:24:17.292108 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 25 01:24:17.301806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:17.318019 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 25 01:24:17.330946 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 01:24:17.347077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 25 01:24:17.362909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 01:24:17.370905 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:17.388780 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 25 01:24:17.393591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 01:24:17.403692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 01:24:17.426925 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 25 01:24:17.438914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 01:24:17.451917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 25 01:24:17.473945 dracut-cmdline[253]: dracut-dracut-053
Apr 25 01:24:17.484002 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 25 01:24:17.516035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 01:24:17.535183 systemd-resolved[254]: Positive Trust Anchors:
Apr 25 01:24:17.535196 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 01:24:17.535228 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 01:24:17.541039 systemd-resolved[254]: Defaulting to hostname 'linux'.
Apr 25 01:24:17.541947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 01:24:17.548070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 01:24:17.618759 kernel: SCSI subsystem initialized
Apr 25 01:24:17.626757 kernel: Loading iSCSI transport class v2.0-870.
Apr 25 01:24:17.635766 kernel: iscsi: registered transport (tcp)
Apr 25 01:24:17.653117 kernel: iscsi: registered transport (qla4xxx)
Apr 25 01:24:17.653181 kernel: QLogic iSCSI HBA Driver
Apr 25 01:24:17.692552 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 25 01:24:17.706008 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 25 01:24:17.748583 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 25 01:24:17.748646 kernel: device-mapper: uevent: version 1.0.3 Apr 25 01:24:17.754011 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 25 01:24:17.803769 kernel: raid6: neonx8 gen() 14097 MB/s Apr 25 01:24:17.822757 kernel: raid6: neonx4 gen() 15688 MB/s Apr 25 01:24:17.841755 kernel: raid6: neonx2 gen() 13242 MB/s Apr 25 01:24:17.861755 kernel: raid6: neonx1 gen() 10483 MB/s Apr 25 01:24:17.880758 kernel: raid6: int64x8 gen() 6981 MB/s Apr 25 01:24:17.899755 kernel: raid6: int64x4 gen() 7371 MB/s Apr 25 01:24:17.919755 kernel: raid6: int64x2 gen() 6145 MB/s Apr 25 01:24:17.941834 kernel: raid6: int64x1 gen() 5069 MB/s Apr 25 01:24:17.941844 kernel: raid6: using algorithm neonx4 gen() 15688 MB/s Apr 25 01:24:17.964817 kernel: raid6: .... xor() 12122 MB/s, rmw enabled Apr 25 01:24:17.964828 kernel: raid6: using neon recovery algorithm Apr 25 01:24:17.976139 kernel: xor: measuring software checksum speed Apr 25 01:24:17.976168 kernel: 8regs : 19793 MB/sec Apr 25 01:24:17.979127 kernel: 32regs : 19693 MB/sec Apr 25 01:24:17.981910 kernel: arm64_neon : 27087 MB/sec Apr 25 01:24:17.985288 kernel: xor: using function: arm64_neon (27087 MB/sec) Apr 25 01:24:18.035768 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 25 01:24:18.046791 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 25 01:24:18.061883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 25 01:24:18.083074 systemd-udevd[439]: Using default interface naming scheme 'v255'. Apr 25 01:24:18.087627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 25 01:24:18.103867 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 25 01:24:18.133864 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Apr 25 01:24:18.166932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 25 01:24:18.180959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 25 01:24:18.222516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 25 01:24:18.239109 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 25 01:24:18.272815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 25 01:24:18.284390 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 25 01:24:18.297338 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 25 01:24:18.308451 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 25 01:24:18.324986 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 25 01:24:18.342247 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 25 01:24:18.352215 kernel: hv_vmbus: Vmbus version:5.3 Apr 25 01:24:18.352237 kernel: hv_vmbus: registering driver hyperv_keyboard Apr 25 01:24:18.358143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 25 01:24:18.358305 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 25 01:24:18.402670 kernel: hv_vmbus: registering driver hid_hyperv Apr 25 01:24:18.402698 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Apr 25 01:24:18.402710 kernel: hv_vmbus: registering driver hv_storvsc Apr 25 01:24:18.368671 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 25 01:24:18.432393 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Apr 25 01:24:18.432418 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Apr 25 01:24:18.432567 kernel: pps_core: LinuxPPS API ver. 
1 registered Apr 25 01:24:18.379302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 25 01:24:18.379454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 01:24:18.469829 kernel: scsi host0: storvsc_host_t Apr 25 01:24:18.470044 kernel: scsi host1: storvsc_host_t Apr 25 01:24:18.470143 kernel: hv_vmbus: registering driver hv_netvsc Apr 25 01:24:18.470154 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Apr 25 01:24:18.470180 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 25 01:24:18.414386 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 01:24:18.482405 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Apr 25 01:24:18.471594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 01:24:18.494015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 01:24:18.527288 kernel: PTP clock support registered Apr 25 01:24:18.530139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 25 01:24:18.550142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 25 01:24:18.579805 kernel: hv_utils: Registering HyperV Utility Driver Apr 25 01:24:18.579830 kernel: hv_vmbus: registering driver hv_utils Apr 25 01:24:18.579840 kernel: hv_utils: Heartbeat IC version 3.0 Apr 25 01:24:18.579850 kernel: hv_utils: Shutdown IC version 3.2 Apr 25 01:24:18.579860 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: VF slot 1 added Apr 25 01:24:18.580018 kernel: hv_utils: TimeSync IC version 4.0 Apr 25 01:24:18.550257 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 25 01:24:18.565995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 25 01:24:18.217375 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Apr 25 01:24:18.217555 systemd-journald[217]: Time jumped backwards, rotating. Apr 25 01:24:18.217595 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 25 01:24:18.566054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 25 01:24:18.236670 kernel: hv_vmbus: registering driver hv_pci Apr 25 01:24:18.236701 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Apr 25 01:24:18.236874 kernel: hv_pci 9926467d-51d4-4e3f-b044-493dc393ede3: PCI VMBus probing: Using version 0x10004 Apr 25 01:24:18.181413 systemd-resolved[254]: Clock change detected. Flushing caches. Apr 25 01:24:18.185818 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 01:24:18.220798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 25 01:24:18.285099 kernel: hv_pci 9926467d-51d4-4e3f-b044-493dc393ede3: PCI host bridge to bus 51d4:00 Apr 25 01:24:18.285899 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Apr 25 01:24:18.286040 kernel: pci_bus 51d4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Apr 25 01:24:18.286141 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 25 01:24:18.286233 kernel: pci_bus 51d4:00: No busn resource found for root bus, will use [bus 00-ff] Apr 25 01:24:18.286314 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 25 01:24:18.286411 kernel: pci 51d4:00:02.0: [15b3:1018] type 00 class 0x020000 Apr 25 01:24:18.290712 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Apr 25 01:24:18.296743 kernel: pci 51d4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 25 01:24:18.291256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 25 01:24:18.346720 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Apr 25 01:24:18.346913 kernel: pci 51d4:00:02.0: enabling Extended Tags Apr 25 01:24:18.347026 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 25 01:24:18.347036 kernel: pci 51d4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 51d4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Apr 25 01:24:18.347128 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 25 01:24:18.347219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#16 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 25 01:24:18.347313 kernel: pci_bus 51d4:00: busn_res: [bus 00-ff] end is updated to 00 Apr 25 01:24:18.331697 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 25 01:24:18.363979 kernel: pci 51d4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Apr 25 01:24:18.385788 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 25 01:24:18.415564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Apr 25 01:24:18.425616 kernel: mlx5_core 51d4:00:02.0: enabling device (0000 -> 0002) Apr 25 01:24:18.431467 kernel: mlx5_core 51d4:00:02.0: firmware version: 16.30.5026 Apr 25 01:24:18.633978 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: VF registering: eth1 Apr 25 01:24:18.634194 kernel: mlx5_core 51d4:00:02.0 eth1: joined to eth0 Apr 25 01:24:18.640527 kernel: mlx5_core 51d4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Apr 25 01:24:18.649456 kernel: mlx5_core 51d4:00:02.0 enP20948s1: renamed from eth1 Apr 25 01:24:19.053456 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (489) Apr 25 01:24:19.068639 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Apr 25 01:24:19.079361 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Apr 25 01:24:19.106451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Apr 25 01:24:19.278488 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (501) Apr 25 01:24:19.291751 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Apr 25 01:24:19.297489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Apr 25 01:24:19.322560 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 25 01:24:19.346476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 25 01:24:19.354449 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 25 01:24:20.367070 disk-uuid[607]: The operation has completed successfully. Apr 25 01:24:20.371273 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 25 01:24:20.435810 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 25 01:24:20.435921 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 25 01:24:20.463623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 25 01:24:20.474399 sh[720]: Success Apr 25 01:24:20.505469 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 25 01:24:20.815399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 25 01:24:20.820241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 25 01:24:20.833568 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 25 01:24:20.862617 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e Apr 25 01:24:20.862665 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 25 01:24:20.868082 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 25 01:24:20.872088 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 25 01:24:20.875543 kernel: BTRFS info (device dm-0): using free space tree Apr 25 01:24:21.278845 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 25 01:24:21.284123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 25 01:24:21.300630 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 25 01:24:21.310699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 25 01:24:21.341166 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452 Apr 25 01:24:21.341218 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 25 01:24:21.344884 kernel: BTRFS info (device sda6): using free space tree Apr 25 01:24:21.388484 kernel: BTRFS info (device sda6): auto enabling async discard Apr 25 01:24:21.402886 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 25 01:24:21.406982 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452 Apr 25 01:24:21.413175 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 25 01:24:21.430619 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 25 01:24:21.440534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 25 01:24:21.452731 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 25 01:24:21.479358 systemd-networkd[904]: lo: Link UP Apr 25 01:24:21.479370 systemd-networkd[904]: lo: Gained carrier Apr 25 01:24:21.480991 systemd-networkd[904]: Enumeration completed Apr 25 01:24:21.485081 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 25 01:24:21.485777 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 25 01:24:21.485780 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 25 01:24:21.491172 systemd[1]: Reached target network.target - Network. Apr 25 01:24:21.567449 kernel: mlx5_core 51d4:00:02.0 enP20948s1: Link up Apr 25 01:24:21.608462 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: Data path switched to VF: enP20948s1 Apr 25 01:24:21.608646 systemd-networkd[904]: enP20948s1: Link UP Apr 25 01:24:21.608752 systemd-networkd[904]: eth0: Link UP Apr 25 01:24:21.608855 systemd-networkd[904]: eth0: Gained carrier Apr 25 01:24:21.608864 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 25 01:24:21.628655 systemd-networkd[904]: enP20948s1: Gained carrier Apr 25 01:24:21.638474 systemd-networkd[904]: eth0: DHCPv4 address 10.0.0.7/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 25 01:24:22.928420 ignition[902]: Ignition 2.19.0 Apr 25 01:24:22.928469 ignition[902]: Stage: fetch-offline Apr 25 01:24:22.932148 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 25 01:24:22.928518 ignition[902]: no configs at "/usr/lib/ignition/base.d" Apr 25 01:24:22.945626 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 25 01:24:22.928526 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 25 01:24:22.928626 ignition[902]: parsed url from cmdline: "" Apr 25 01:24:22.928629 ignition[902]: no config URL provided Apr 25 01:24:22.928634 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Apr 25 01:24:22.928643 ignition[902]: no config at "/usr/lib/ignition/user.ign" Apr 25 01:24:22.928648 ignition[902]: failed to fetch config: resource requires networking Apr 25 01:24:22.928842 ignition[902]: Ignition finished successfully Apr 25 01:24:22.971093 ignition[915]: Ignition 2.19.0 Apr 25 01:24:22.971100 ignition[915]: Stage: fetch Apr 25 01:24:22.971320 ignition[915]: no configs at "/usr/lib/ignition/base.d" Apr 25 01:24:22.971332 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 25 01:24:22.971458 ignition[915]: parsed url from cmdline: "" Apr 25 01:24:22.971461 ignition[915]: no config URL provided Apr 25 01:24:22.971467 ignition[915]: reading system config file "/usr/lib/ignition/user.ign" Apr 25 01:24:22.971474 ignition[915]: no config at "/usr/lib/ignition/user.ign" Apr 25 01:24:22.971498 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Apr 25 01:24:23.086210 ignition[915]: GET result: OK Apr 25 01:24:23.086279 ignition[915]: config has been read from IMDS userdata Apr 25 01:24:23.086324 ignition[915]: parsing config with SHA512: e847e49fb5486bad77303067a992924b0422a94b079e72458a4c0376b151c1357fd854677cdb8288819d7e842426a349ee2000391e04be5487617573b31901ea Apr 25 01:24:23.090314 unknown[915]: fetched base config from "system" Apr 25 01:24:23.090787 ignition[915]: fetch: fetch complete Apr 25 01:24:23.090322 unknown[915]: fetched base config from "system" Apr 25 01:24:23.090793 ignition[915]: fetch: fetch passed Apr 25 01:24:23.090327 unknown[915]: fetched user config from "azure" Apr 25 01:24:23.090849 ignition[915]: Ignition finished 
successfully Apr 25 01:24:23.094814 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 25 01:24:23.111715 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 25 01:24:23.135245 ignition[921]: Ignition 2.19.0 Apr 25 01:24:23.135255 ignition[921]: Stage: kargs Apr 25 01:24:23.139222 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 25 01:24:23.135454 ignition[921]: no configs at "/usr/lib/ignition/base.d" Apr 25 01:24:23.135464 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 25 01:24:23.136375 ignition[921]: kargs: kargs passed Apr 25 01:24:23.136421 ignition[921]: Ignition finished successfully Apr 25 01:24:23.161745 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 25 01:24:23.180068 ignition[927]: Ignition 2.19.0 Apr 25 01:24:23.180080 ignition[927]: Stage: disks Apr 25 01:24:23.180290 ignition[927]: no configs at "/usr/lib/ignition/base.d" Apr 25 01:24:23.185589 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 25 01:24:23.180299 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 25 01:24:23.194036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 25 01:24:23.182868 ignition[927]: disks: disks passed Apr 25 01:24:23.202857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 25 01:24:23.182920 ignition[927]: Ignition finished successfully Apr 25 01:24:23.212552 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 25 01:24:23.221492 systemd[1]: Reached target sysinit.target - System Initialization. Apr 25 01:24:23.228526 systemd[1]: Reached target basic.target - Basic System. Apr 25 01:24:23.249716 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 25 01:24:23.267169 systemd-networkd[904]: eth0: Gained IPv6LL Apr 25 01:24:23.368162 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Apr 25 01:24:23.377032 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 25 01:24:23.391606 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 25 01:24:23.445501 kernel: EXT4-fs (sda9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none. Apr 25 01:24:23.445731 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 25 01:24:23.449798 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 25 01:24:23.494510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 25 01:24:23.514455 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Apr 25 01:24:23.524795 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452 Apr 25 01:24:23.524820 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 25 01:24:23.529086 kernel: BTRFS info (device sda6): using free space tree Apr 25 01:24:23.533545 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 25 01:24:23.543638 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 25 01:24:23.553712 kernel: BTRFS info (device sda6): auto enabling async discard Apr 25 01:24:23.559175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 25 01:24:23.559211 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 25 01:24:23.565950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 25 01:24:23.578809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 25 01:24:23.597732 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 25 01:24:24.192990 coreos-metadata[963]: Apr 25 01:24:24.192 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Apr 25 01:24:24.201175 coreos-metadata[963]: Apr 25 01:24:24.201 INFO Fetch successful Apr 25 01:24:24.201175 coreos-metadata[963]: Apr 25 01:24:24.201 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Apr 25 01:24:24.214807 coreos-metadata[963]: Apr 25 01:24:24.211 INFO Fetch successful Apr 25 01:24:24.260577 coreos-metadata[963]: Apr 25 01:24:24.260 INFO wrote hostname ci-4081.3.6-n-cf3dcbc0ec to /sysroot/etc/hostname Apr 25 01:24:24.267712 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 25 01:24:24.643973 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Apr 25 01:24:24.689503 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Apr 25 01:24:24.697603 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Apr 25 01:24:24.726483 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Apr 25 01:24:26.062575 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 25 01:24:26.075825 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 25 01:24:26.084605 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 25 01:24:26.100450 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452 Apr 25 01:24:26.101902 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 25 01:24:26.128003 ignition[1064]: INFO : Ignition 2.19.0 Apr 25 01:24:26.130732 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 25 01:24:26.135826 ignition[1064]: INFO : Stage: mount Apr 25 01:24:26.135826 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 25 01:24:26.135826 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 25 01:24:26.159204 ignition[1064]: INFO : mount: mount passed Apr 25 01:24:26.159204 ignition[1064]: INFO : Ignition finished successfully Apr 25 01:24:26.141643 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 25 01:24:26.165548 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 25 01:24:26.176777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 25 01:24:26.203455 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075) Apr 25 01:24:26.213787 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452 Apr 25 01:24:26.213809 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 25 01:24:26.217160 kernel: BTRFS info (device sda6): using free space tree Apr 25 01:24:26.225418 kernel: BTRFS info (device sda6): auto enabling async discard Apr 25 01:24:26.226092 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 25 01:24:26.252573 ignition[1092]: INFO : Ignition 2.19.0 Apr 25 01:24:26.252573 ignition[1092]: INFO : Stage: files Apr 25 01:24:26.259173 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 25 01:24:26.259173 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Apr 25 01:24:26.259173 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping Apr 25 01:24:26.278468 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 25 01:24:26.278468 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 25 01:24:26.342219 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 25 01:24:26.348184 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 25 01:24:26.348184 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 25 01:24:26.342632 unknown[1092]: wrote ssh authorized keys file for user: core Apr 25 01:24:26.375560 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 25 01:24:26.384254 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 25 01:24:26.410390 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 25 01:24:26.587033 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 25 01:24:26.595693 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Apr 25 01:24:27.042775 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 25 01:24:27.369923 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 25 01:24:27.369923 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 25 01:24:27.417095 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 25 01:24:27.426242 ignition[1092]: INFO : files: files passed Apr 25 01:24:27.426242 ignition[1092]: INFO : Ignition finished successfully Apr 25 01:24:27.426837 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 25 01:24:27.463714 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 25 01:24:27.477615 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 25 01:24:27.488819 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 25 01:24:27.488964 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 25 01:24:27.543850 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 25 01:24:27.543850 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 25 01:24:27.557700 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 25 01:24:27.559117 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 25 01:24:27.569995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 25 01:24:27.588874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 25 01:24:27.617482 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 25 01:24:27.617605 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 25 01:24:27.627531 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 25 01:24:27.636752 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 25 01:24:27.644952 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 25 01:24:27.656000 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 25 01:24:27.677595 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 25 01:24:27.691660 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 25 01:24:27.707719 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 25 01:24:27.713326 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 25 01:24:27.723876 systemd[1]: Stopped target timers.target - Timer Units.
Apr 25 01:24:27.733236 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 25 01:24:27.733413 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 25 01:24:27.746841 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 25 01:24:27.755977 systemd[1]: Stopped target basic.target - Basic System.
Apr 25 01:24:27.763922 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 25 01:24:27.772007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 25 01:24:27.781486 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 25 01:24:27.790736 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 25 01:24:27.799575 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 25 01:24:27.808917 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 25 01:24:27.818468 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 25 01:24:27.827174 systemd[1]: Stopped target swap.target - Swaps.
Apr 25 01:24:27.834656 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 25 01:24:27.834827 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 25 01:24:27.846310 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 25 01:24:27.855288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 01:24:27.864623 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 25 01:24:27.873214 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 01:24:27.878600 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 25 01:24:27.878771 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 25 01:24:27.892518 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 25 01:24:27.892676 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 25 01:24:27.902017 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 25 01:24:27.902182 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 25 01:24:27.911158 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 25 01:24:27.911307 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 25 01:24:27.941093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 25 01:24:27.950723 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 25 01:24:27.957789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 25 01:24:27.973678 ignition[1144]: INFO : Ignition 2.19.0
Apr 25 01:24:27.973678 ignition[1144]: INFO : Stage: umount
Apr 25 01:24:27.973678 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 25 01:24:27.973678 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Apr 25 01:24:27.973678 ignition[1144]: INFO : umount: umount passed
Apr 25 01:24:27.973678 ignition[1144]: INFO : Ignition finished successfully
Apr 25 01:24:27.958010 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 01:24:27.972766 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 25 01:24:27.972878 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 25 01:24:27.981251 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 25 01:24:27.981348 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 25 01:24:27.988400 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 25 01:24:27.988653 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 25 01:24:28.000876 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 25 01:24:28.000962 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 25 01:24:28.012678 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 25 01:24:28.012743 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 25 01:24:28.021299 systemd[1]: Stopped target network.target - Network.
Apr 25 01:24:28.029492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 25 01:24:28.029568 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 25 01:24:28.039128 systemd[1]: Stopped target paths.target - Path Units.
Apr 25 01:24:28.047467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 25 01:24:28.056527 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 01:24:28.066632 systemd[1]: Stopped target slices.target - Slice Units.
Apr 25 01:24:28.074529 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 25 01:24:28.082423 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 25 01:24:28.082484 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 25 01:24:28.094930 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 25 01:24:28.094988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 25 01:24:28.103705 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 25 01:24:28.103756 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 25 01:24:28.111926 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 25 01:24:28.111962 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 25 01:24:28.120071 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 25 01:24:28.129426 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 25 01:24:28.139502 systemd-networkd[904]: eth0: DHCPv6 lease lost
Apr 25 01:24:28.141303 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 25 01:24:28.142034 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 25 01:24:28.142158 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 25 01:24:28.160353 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 25 01:24:28.160458 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 25 01:24:28.165459 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 25 01:24:28.165542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 25 01:24:28.176873 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 25 01:24:28.176936 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 01:24:28.198871 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 25 01:24:28.207167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 25 01:24:28.207265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 25 01:24:28.217163 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 25 01:24:28.356636 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: Data path switched from VF: enP20948s1
Apr 25 01:24:28.217226 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 25 01:24:28.225951 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 25 01:24:28.225994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 25 01:24:28.234936 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 25 01:24:28.234975 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 01:24:28.245288 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 01:24:28.281386 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 25 01:24:28.281640 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 01:24:28.292111 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 25 01:24:28.292155 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 25 01:24:28.300341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 25 01:24:28.300373 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 01:24:28.309302 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 25 01:24:28.309351 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 25 01:24:28.322332 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 25 01:24:28.322374 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 25 01:24:28.341618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 25 01:24:28.341678 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 25 01:24:28.364572 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 25 01:24:28.374497 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 25 01:24:28.374569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 01:24:28.385207 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 25 01:24:28.385268 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 01:24:28.395624 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 25 01:24:28.395679 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 01:24:28.405034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 01:24:28.405072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:28.415756 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 25 01:24:28.417906 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 25 01:24:28.425198 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 25 01:24:28.425282 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 25 01:24:28.433171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 25 01:24:28.433254 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 25 01:24:28.445059 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 25 01:24:28.452665 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 25 01:24:28.452760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 25 01:24:28.478705 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 25 01:24:28.512548 systemd[1]: Switching root.
Apr 25 01:24:28.647383 systemd-journald[217]: Journal stopped
Apr 25 01:24:34.829201 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Apr 25 01:24:34.829227 kernel: SELinux: policy capability network_peer_controls=1
Apr 25 01:24:34.829237 kernel: SELinux: policy capability open_perms=1
Apr 25 01:24:34.829247 kernel: SELinux: policy capability extended_socket_class=1
Apr 25 01:24:34.829255 kernel: SELinux: policy capability always_check_network=0
Apr 25 01:24:34.829263 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 25 01:24:34.829271 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 25 01:24:34.829279 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 25 01:24:34.829287 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 25 01:24:34.829296 kernel: audit: type=1403 audit(1777080270.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 25 01:24:34.829306 systemd[1]: Successfully loaded SELinux policy in 457.010ms.
Apr 25 01:24:34.829316 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.639ms.
Apr 25 01:24:34.829328 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 25 01:24:34.829337 systemd[1]: Detected virtualization microsoft.
Apr 25 01:24:34.829346 systemd[1]: Detected architecture arm64.
Apr 25 01:24:34.829356 systemd[1]: Detected first boot.
Apr 25 01:24:34.829366 systemd[1]: Hostname set to .
Apr 25 01:24:34.829375 systemd[1]: Initializing machine ID from random generator.
Apr 25 01:24:34.829384 zram_generator::config[1187]: No configuration found.
Apr 25 01:24:34.829394 systemd[1]: Populated /etc with preset unit settings.
Apr 25 01:24:34.829403 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 25 01:24:34.829414 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 25 01:24:34.829424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 25 01:24:34.829434 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 25 01:24:34.829451 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 25 01:24:34.829461 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 25 01:24:34.829471 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 25 01:24:34.829480 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 25 01:24:34.829492 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 25 01:24:34.829502 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 25 01:24:34.829513 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 25 01:24:34.829522 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 25 01:24:34.829532 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 25 01:24:34.829542 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 25 01:24:34.829552 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 25 01:24:34.829562 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 25 01:24:34.829572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 25 01:24:34.829582 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 25 01:24:34.829592 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 25 01:24:34.829601 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 25 01:24:34.829614 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 25 01:24:34.829623 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 25 01:24:34.829633 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 25 01:24:34.829642 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 25 01:24:34.829653 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 25 01:24:34.829663 systemd[1]: Reached target slices.target - Slice Units.
Apr 25 01:24:34.829672 systemd[1]: Reached target swap.target - Swaps.
Apr 25 01:24:34.829682 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 25 01:24:34.829691 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 25 01:24:34.829701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 25 01:24:34.829711 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 25 01:24:34.829722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 25 01:24:34.829733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 25 01:24:34.829743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 25 01:24:34.829754 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 25 01:24:34.829764 systemd[1]: Mounting media.mount - External Media Directory...
Apr 25 01:24:34.829774 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 25 01:24:34.829786 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 25 01:24:34.829795 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 25 01:24:34.829806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 25 01:24:34.829817 systemd[1]: Reached target machines.target - Containers.
Apr 25 01:24:34.829827 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 25 01:24:34.829838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 01:24:34.829848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 25 01:24:34.829858 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 25 01:24:34.829870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 01:24:34.829880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 01:24:34.829890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 01:24:34.829900 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 25 01:24:34.829910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 01:24:34.829920 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 25 01:24:34.829930 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 25 01:24:34.829940 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 25 01:24:34.829950 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 25 01:24:34.829962 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 25 01:24:34.829973 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 25 01:24:34.829983 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 25 01:24:34.829994 kernel: fuse: init (API version 7.39)
Apr 25 01:24:34.830003 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 25 01:24:34.830013 kernel: loop: module loaded
Apr 25 01:24:34.830022 kernel: ACPI: bus type drm_connector registered
Apr 25 01:24:34.830031 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 25 01:24:34.830057 systemd-journald[1269]: Collecting audit messages is disabled.
Apr 25 01:24:34.830079 systemd-journald[1269]: Journal started
Apr 25 01:24:34.830099 systemd-journald[1269]: Runtime Journal (/run/log/journal/22862b70daf0452185970feaff956324) is 8.0M, max 78.5M, 70.5M free.
Apr 25 01:24:33.875621 systemd[1]: Queued start job for default target multi-user.target.
Apr 25 01:24:34.105263 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 25 01:24:34.105634 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 25 01:24:34.105948 systemd[1]: systemd-journald.service: Consumed 2.547s CPU time.
Apr 25 01:24:34.861518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 25 01:24:34.869460 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 25 01:24:34.869522 systemd[1]: Stopped verity-setup.service.
Apr 25 01:24:34.886651 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 25 01:24:34.887489 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 25 01:24:34.893179 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 25 01:24:34.898918 systemd[1]: Mounted media.mount - External Media Directory.
Apr 25 01:24:34.904535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 25 01:24:34.909976 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 25 01:24:34.915242 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 25 01:24:34.920044 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 25 01:24:34.925343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 25 01:24:34.931284 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 25 01:24:34.931427 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 25 01:24:34.937087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 01:24:34.937228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 01:24:34.942538 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 01:24:34.942676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 01:24:34.947714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 01:24:34.947841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 01:24:34.953563 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 25 01:24:34.953689 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 25 01:24:34.958667 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 01:24:34.958800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 01:24:34.963790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 25 01:24:34.969353 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 25 01:24:34.975171 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 25 01:24:34.980963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 25 01:24:34.995510 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 25 01:24:35.007518 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 25 01:24:35.013419 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 25 01:24:35.018557 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 25 01:24:35.018620 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 25 01:24:35.024157 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 25 01:24:35.030787 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 25 01:24:35.037602 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 25 01:24:35.045981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 01:24:35.048697 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 25 01:24:35.057748 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 25 01:24:35.064922 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 01:24:35.066080 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 25 01:24:35.071038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 01:24:35.072082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 01:24:35.078673 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 25 01:24:35.092687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 25 01:24:35.099492 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 25 01:24:35.108381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 25 01:24:35.117971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 25 01:24:35.124531 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 25 01:24:35.129826 systemd-journald[1269]: Time spent on flushing to /var/log/journal/22862b70daf0452185970feaff956324 is 56.325ms for 903 entries.
Apr 25 01:24:35.129826 systemd-journald[1269]: System Journal (/var/log/journal/22862b70daf0452185970feaff956324) is 11.8M, max 2.6G, 2.6G free.
Apr 25 01:24:35.296377 systemd-journald[1269]: Received client request to flush runtime journal.
Apr 25 01:24:35.296495 kernel: loop0: detected capacity change from 0 to 114328
Apr 25 01:24:35.296536 systemd-journald[1269]: /var/log/journal/22862b70daf0452185970feaff956324/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Apr 25 01:24:35.296569 systemd-journald[1269]: Rotating system journal.
Apr 25 01:24:35.138621 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 25 01:24:35.154335 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 25 01:24:35.182763 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 25 01:24:35.188700 udevadm[1324]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 25 01:24:35.297680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 25 01:24:35.316904 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 25 01:24:35.328993 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Apr 25 01:24:35.329009 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Apr 25 01:24:35.334373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 25 01:24:35.349588 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 25 01:24:35.355792 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 25 01:24:35.356712 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 25 01:24:35.472844 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 25 01:24:35.485609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 25 01:24:35.500146 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Apr 25 01:24:35.500164 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Apr 25 01:24:35.505951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 25 01:24:35.652455 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 25 01:24:35.669463 kernel: loop1: detected capacity change from 0 to 197488
Apr 25 01:24:35.722464 kernel: loop2: detected capacity change from 0 to 31320
Apr 25 01:24:35.925315 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 25 01:24:35.935681 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 25 01:24:35.961028 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Apr 25 01:24:36.102008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 25 01:24:36.125701 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 25 01:24:36.186382 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 25 01:24:36.212905 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 25 01:24:36.228463 kernel: loop3: detected capacity change from 0 to 114432
Apr 25 01:24:36.244568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Apr 25 01:24:36.262460 kernel: mousedev: PS/2 mouse device common for all mice
Apr 25 01:24:36.309014 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 25 01:24:36.333927 kernel: hv_vmbus: registering driver hv_balloon
Apr 25 01:24:36.334039 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Apr 25 01:24:36.337441 kernel: hv_balloon: Memory hot add disabled on ARM64
Apr 25 01:24:36.362758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:36.372988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 01:24:36.373382 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:36.384692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:36.430506 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1374)
Apr 25 01:24:36.460361 kernel: hv_vmbus: registering driver hyperv_fb
Apr 25 01:24:36.460459 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Apr 25 01:24:36.468471 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Apr 25 01:24:36.475647 kernel: Console: switching to colour dummy device 80x25
Apr 25 01:24:36.486977 kernel: Console: switching to colour frame buffer device 128x48
Apr 25 01:24:36.494325 systemd-networkd[1367]: lo: Link UP
Apr 25 01:24:36.494952 systemd-networkd[1367]: lo: Gained carrier
Apr 25 01:24:36.494993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Apr 25 01:24:36.497018 systemd-networkd[1367]: Enumeration completed
Apr 25 01:24:36.497453 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 01:24:36.497539 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 25 01:24:36.500929 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 25 01:24:36.523100 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 25 01:24:36.530620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 25 01:24:36.548904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 25 01:24:36.550497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:36.560623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 25 01:24:36.573468 kernel: mlx5_core 51d4:00:02.0 enP20948s1: Link up
Apr 25 01:24:36.599480 kernel: hv_netvsc 7ced8db6-8bc0-7ced-8db6-8bc07ced8db6 eth0: Data path switched to VF: enP20948s1
Apr 25 01:24:36.601079 systemd-networkd[1367]: enP20948s1: Link UP
Apr 25 01:24:36.601543 systemd-networkd[1367]: eth0: Link UP
Apr 25 01:24:36.601548 systemd-networkd[1367]: eth0: Gained carrier
Apr 25 01:24:36.601564 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 25 01:24:36.606390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 25 01:24:36.606726 systemd-networkd[1367]: enP20948s1: Gained carrier
Apr 25 01:24:36.617530 systemd-networkd[1367]: eth0: DHCPv4 address 10.0.0.7/24, gateway 10.0.0.1 acquired from 168.63.129.16
Apr 25 01:24:36.690540 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 25 01:24:36.703642 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 25 01:24:36.714483 kernel: loop4: detected capacity change from 0 to 114328
Apr 25 01:24:36.731468 kernel: loop5: detected capacity change from 0 to 197488
Apr 25 01:24:36.747469 kernel: loop6: detected capacity change from 0 to 31320
Apr 25 01:24:36.759461 kernel: loop7: detected capacity change from 0 to 114432
Apr 25 01:24:36.767409 (sd-merge)[1452]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Apr 25 01:24:36.767856 (sd-merge)[1452]: Merged extensions into '/usr'.
Apr 25 01:24:36.772823 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 25 01:24:36.772839 systemd[1]: Reloading...
Apr 25 01:24:36.829111 zram_generator::config[1486]: No configuration found.
Apr 25 01:24:36.844472 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 01:24:37.000082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 01:24:37.088107 systemd[1]: Reloading finished in 314 ms.
Apr 25 01:24:37.121447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 25 01:24:37.126836 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 25 01:24:37.132365 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 25 01:24:37.141480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 25 01:24:37.152575 systemd[1]: Starting ensure-sysext.service...
Apr 25 01:24:37.157282 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 25 01:24:37.165643 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 25 01:24:37.166321 lvm[1542]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 25 01:24:37.176622 systemd[1]: Reloading requested from client PID 1541 ('systemctl') (unit ensure-sysext.service)...
Apr 25 01:24:37.176639 systemd[1]: Reloading...
Apr 25 01:24:37.200864 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 25 01:24:37.201936 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 25 01:24:37.203219 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 25 01:24:37.203580 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Apr 25 01:24:37.203720 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Apr 25 01:24:37.206871 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 01:24:37.206979 systemd-tmpfiles[1543]: Skipping /boot
Apr 25 01:24:37.216577 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot.
Apr 25 01:24:37.216720 systemd-tmpfiles[1543]: Skipping /boot
Apr 25 01:24:37.257516 zram_generator::config[1574]: No configuration found.
Apr 25 01:24:37.379960 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 01:24:37.470800 systemd[1]: Reloading finished in 293 ms.
Apr 25 01:24:37.490023 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 25 01:24:37.500902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 25 01:24:37.523165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 25 01:24:37.530381 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 25 01:24:37.537723 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 25 01:24:37.554864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 25 01:24:37.569211 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 25 01:24:37.581916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 01:24:37.585800 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 01:24:37.592504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 01:24:37.605977 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 01:24:37.613903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 01:24:37.615068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 01:24:37.615249 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 01:24:37.622244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 01:24:37.622560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 01:24:37.629041 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 01:24:37.629338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 01:24:37.640318 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 25 01:24:37.650359 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 25 01:24:37.658179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 01:24:37.664946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 01:24:37.672496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 01:24:37.680722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 01:24:37.686275 systemd-resolved[1642]: Positive Trust Anchors:
Apr 25 01:24:37.686288 systemd-resolved[1642]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 25 01:24:37.686320 systemd-resolved[1642]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 25 01:24:37.686992 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 01:24:37.687849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 01:24:37.688567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 01:24:37.694164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 01:24:37.694311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 01:24:37.700165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 01:24:37.700300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 01:24:37.710516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 25 01:24:37.716679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 25 01:24:37.722720 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 25 01:24:37.724081 systemd-resolved[1642]: Using system hostname 'ci-4081.3.6-n-cf3dcbc0ec'.
Apr 25 01:24:37.736006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 25 01:24:37.745726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 25 01:24:37.750191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 25 01:24:37.750372 systemd[1]: Reached target time-set.target - System Time Set.
Apr 25 01:24:37.755656 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 25 01:24:37.761048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 25 01:24:37.761217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 25 01:24:37.766727 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 25 01:24:37.766862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 25 01:24:37.772081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 25 01:24:37.772214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 25 01:24:37.778119 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 25 01:24:37.778237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 25 01:24:37.781558 augenrules[1670]: No rules
Apr 25 01:24:37.783486 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 25 01:24:37.792034 systemd[1]: Finished ensure-sysext.service.
Apr 25 01:24:37.798227 systemd[1]: Reached target network.target - Network.
Apr 25 01:24:37.802020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 25 01:24:37.807273 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 25 01:24:37.807342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 25 01:24:38.306539 systemd-networkd[1367]: eth0: Gained IPv6LL
Apr 25 01:24:38.310221 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 25 01:24:38.316290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 25 01:24:38.322975 systemd[1]: Reached target network-online.target - Network is Online.
Apr 25 01:24:38.327828 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 25 01:24:42.354463 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 25 01:24:42.366712 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 25 01:24:42.376657 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 25 01:24:42.390700 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 25 01:24:42.395832 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 25 01:24:42.400854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 25 01:24:42.406128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 25 01:24:42.411680 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 25 01:24:42.416453 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 25 01:24:42.421897 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 25 01:24:42.427230 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 25 01:24:42.427264 systemd[1]: Reached target paths.target - Path Units.
Apr 25 01:24:42.431317 systemd[1]: Reached target timers.target - Timer Units.
Apr 25 01:24:42.436421 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 25 01:24:42.442520 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 25 01:24:42.451087 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 25 01:24:42.456077 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 25 01:24:42.460845 systemd[1]: Reached target sockets.target - Socket Units.
Apr 25 01:24:42.464939 systemd[1]: Reached target basic.target - Basic System.
Apr 25 01:24:42.468969 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 25 01:24:42.468999 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 25 01:24:42.478527 systemd[1]: Starting chronyd.service - NTP client/server...
Apr 25 01:24:42.484562 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 25 01:24:42.501398 (chronyd)[1690]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Apr 25 01:24:42.501686 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 25 01:24:42.508668 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 25 01:24:42.516622 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 25 01:24:42.524673 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 25 01:24:42.530287 chronyd[1699]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Apr 25 01:24:42.531585 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 25 01:24:42.531632 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Apr 25 01:24:42.534667 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Apr 25 01:24:42.539423 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Apr 25 01:24:42.542580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 01:24:42.545247 KVP[1700]: KVP starting; pid is:1700
Apr 25 01:24:42.551293 jq[1696]: false
Apr 25 01:24:42.552719 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 25 01:24:42.561862 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 25 01:24:42.569560 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found loop4
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found loop5
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found loop6
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found loop7
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda1
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda2
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda3
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found usr
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda4
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda6
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda7
Apr 25 01:24:42.580470 extend-filesystems[1697]: Found sda9
Apr 25 01:24:42.580470 extend-filesystems[1697]: Checking size of /dev/sda9
Apr 25 01:24:42.691554 kernel: hv_utils: KVP IC version 4.0
Apr 25 01:24:42.587744 chronyd[1699]: Timezone right/UTC failed leap second check, ignoring
Apr 25 01:24:42.582617 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 25 01:24:42.597567 KVP[1700]: KVP LIC Version: 3.1
Apr 25 01:24:42.599842 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 25 01:24:42.599623 chronyd[1699]: Loaded seccomp filter (level 2)
Apr 25 01:24:42.612104 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 25 01:24:42.619363 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 25 01:24:42.620631 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 25 01:24:42.623701 systemd[1]: Starting update-engine.service - Update Engine...
Apr 25 01:24:42.692700 jq[1720]: true
Apr 25 01:24:42.647707 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 25 01:24:42.659540 systemd[1]: Started chronyd.service - NTP client/server.
Apr 25 01:24:42.685994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 25 01:24:42.686166 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 25 01:24:42.690094 systemd[1]: motdgen.service: Deactivated successfully.
Apr 25 01:24:42.690268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 25 01:24:42.701100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 25 01:24:42.702516 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 25 01:24:42.711009 extend-filesystems[1697]: Old size kept for /dev/sda9
Apr 25 01:24:42.725348 extend-filesystems[1697]: Found sr0
Apr 25 01:24:42.713125 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 25 01:24:42.713533 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 25 01:24:42.729324 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 25 01:24:42.736686 update_engine[1719]: I20260425 01:24:42.735975 1719 main.cc:92] Flatcar Update Engine starting
Apr 25 01:24:42.758786 (ntainerd)[1734]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 25 01:24:42.765566 jq[1733]: true
Apr 25 01:24:42.771288 dbus-daemon[1693]: [system] SELinux support is enabled
Apr 25 01:24:42.774729 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 25 01:24:42.782975 systemd-logind[1716]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 25 01:24:42.784273 update_engine[1719]: I20260425 01:24:42.783336 1719 update_check_scheduler.cc:74] Next update check in 5m12s
Apr 25 01:24:42.784659 systemd-logind[1716]: New seat seat0.
Apr 25 01:24:42.793253 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 25 01:24:42.793301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 25 01:24:42.801734 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 25 01:24:42.801754 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 25 01:24:42.810282 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 25 01:24:42.816157 systemd[1]: Started update-engine.service - Update Engine.
Apr 25 01:24:42.827129 tar[1730]: linux-arm64/LICENSE
Apr 25 01:24:42.827129 tar[1730]: linux-arm64/helm
Apr 25 01:24:42.889845 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 25 01:24:42.894365 coreos-metadata[1692]: Apr 25 01:24:42.890 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Apr 25 01:24:42.902429 coreos-metadata[1692]: Apr 25 01:24:42.902 INFO Fetch successful
Apr 25 01:24:42.902429 coreos-metadata[1692]: Apr 25 01:24:42.902 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Apr 25 01:24:42.908451 coreos-metadata[1692]: Apr 25 01:24:42.908 INFO Fetch successful
Apr 25 01:24:42.909004 coreos-metadata[1692]: Apr 25 01:24:42.908 INFO Fetching http://168.63.129.16/machine/edefbfac-5d1b-4a84-adea-25bb640b0c0d/44bf1828%2D1f55%2D452a%2D841d%2De0a43b317783.%5Fci%2D4081.3.6%2Dn%2Dcf3dcbc0ec?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Apr 25 01:24:42.910511 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1750)
Apr 25 01:24:42.912177 coreos-metadata[1692]: Apr 25 01:24:42.912 INFO Fetch successful
Apr 25 01:24:42.912177 coreos-metadata[1692]: Apr 25 01:24:42.912 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Apr 25 01:24:42.927036 coreos-metadata[1692]: Apr 25 01:24:42.925 INFO Fetch successful
Apr 25 01:24:42.972608 bash[1783]: Updated "/home/core/.ssh/authorized_keys"
Apr 25 01:24:42.973908 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 25 01:24:42.993988 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 25 01:24:43.001958 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 25 01:24:43.010067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 25 01:24:43.187594 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 25 01:24:43.434761 tar[1730]: linux-arm64/README.md
Apr 25 01:24:43.449310 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 25 01:24:43.647126 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 25 01:24:43.671753 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 25 01:24:43.683801 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 25 01:24:43.690697 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Apr 25 01:24:43.706263 systemd[1]: issuegen.service: Deactivated successfully.
Apr 25 01:24:43.706456 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 25 01:24:43.722858 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 25 01:24:43.743682 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 25 01:24:43.757268 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 25 01:24:43.763014 containerd[1734]: time="2026-04-25T01:24:43.762926460Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 25 01:24:43.764776 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 25 01:24:43.774329 systemd[1]: Reached target getty.target - Login Prompts.
Apr 25 01:24:43.787649 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Apr 25 01:24:43.797397 containerd[1734]: time="2026-04-25T01:24:43.797351180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.799120 containerd[1734]: time="2026-04-25T01:24:43.799085500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 25 01:24:43.799278 containerd[1734]: time="2026-04-25T01:24:43.799260420Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 25 01:24:43.799355 containerd[1734]: time="2026-04-25T01:24:43.799341500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 25 01:24:43.799588 containerd[1734]: time="2026-04-25T01:24:43.799570100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 25 01:24:43.799663 containerd[1734]: time="2026-04-25T01:24:43.799649380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.799802 containerd[1734]: time="2026-04-25T01:24:43.799783460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 01:24:43.799867 containerd[1734]: time="2026-04-25T01:24:43.799854060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800489 containerd[1734]: time="2026-04-25T01:24:43.800082460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800489 containerd[1734]: time="2026-04-25T01:24:43.800106500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800489 containerd[1734]: time="2026-04-25T01:24:43.800119620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800489 containerd[1734]: time="2026-04-25T01:24:43.800129260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800489 containerd[1734]: time="2026-04-25T01:24:43.800209900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800489 containerd[1734]: time="2026-04-25T01:24:43.800395420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800753 containerd[1734]: time="2026-04-25T01:24:43.800731340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 25 01:24:43.800816 containerd[1734]: time="2026-04-25T01:24:43.800803620Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 25 01:24:43.800958 containerd[1734]: time="2026-04-25T01:24:43.800939740Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 25 01:24:43.801073 containerd[1734]: time="2026-04-25T01:24:43.801056660Z" level=info msg="metadata content store policy set" policy=shared
Apr 25 01:24:43.808381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 01:24:43.814924 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 25 01:24:43.824695 containerd[1734]: time="2026-04-25T01:24:43.824656980Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 25 01:24:43.824839 containerd[1734]: time="2026-04-25T01:24:43.824826620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 25 01:24:43.825029 containerd[1734]: time="2026-04-25T01:24:43.825013780Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 25 01:24:43.825101 containerd[1734]: time="2026-04-25T01:24:43.825088900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 25 01:24:43.825159 containerd[1734]: time="2026-04-25T01:24:43.825148300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 25 01:24:43.825410 containerd[1734]: time="2026-04-25T01:24:43.825392620Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 25 01:24:43.825748 containerd[1734]: time="2026-04-25T01:24:43.825733100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 25 01:24:43.825933 containerd[1734]: time="2026-04-25T01:24:43.825917860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 25 01:24:43.826017 containerd[1734]: time="2026-04-25T01:24:43.826005100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 25 01:24:43.826086 containerd[1734]: time="2026-04-25T01:24:43.826073900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 25 01:24:43.826156 containerd[1734]: time="2026-04-25T01:24:43.826134740Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826207 containerd[1734]: time="2026-04-25T01:24:43.826196180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826278 containerd[1734]: time="2026-04-25T01:24:43.826265340Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826350 containerd[1734]: time="2026-04-25T01:24:43.826338100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826429 containerd[1734]: time="2026-04-25T01:24:43.826415500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826500 containerd[1734]: time="2026-04-25T01:24:43.826487620Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826577 containerd[1734]: time="2026-04-25T01:24:43.826564460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826643 containerd[1734]: time="2026-04-25T01:24:43.826632580Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 25 01:24:43.826709 containerd[1734]: time="2026-04-25T01:24:43.826691660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.826766 containerd[1734]: time="2026-04-25T01:24:43.826754700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.826833 containerd[1734]: time="2026-04-25T01:24:43.826821940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.826901 containerd[1734]: time="2026-04-25T01:24:43.826888900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.826956 containerd[1734]: time="2026-04-25T01:24:43.826941740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827011 containerd[1734]: time="2026-04-25T01:24:43.827000460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827082 containerd[1734]: time="2026-04-25T01:24:43.827070100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827151 containerd[1734]: time="2026-04-25T01:24:43.827131740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827215 containerd[1734]: time="2026-04-25T01:24:43.827198260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827277 containerd[1734]: time="2026-04-25T01:24:43.827261260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827402 containerd[1734]: time="2026-04-25T01:24:43.827338740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827402 containerd[1734]: time="2026-04-25T01:24:43.827354620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827402 containerd[1734]: time="2026-04-25T01:24:43.827367980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827402 containerd[1734]: time="2026-04-25T01:24:43.827383540Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 25 01:24:43.827537 containerd[1734]: time="2026-04-25T01:24:43.827525300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827608 containerd[1734]: time="2026-04-25T01:24:43.827596820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.827735 containerd[1734]: time="2026-04-25T01:24:43.827651060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 25 01:24:43.828334 containerd[1734]: time="2026-04-25T01:24:43.828312620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 25 01:24:43.828526 containerd[1734]: time="2026-04-25T01:24:43.828429500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 25 01:24:43.828526 containerd[1734]: time="2026-04-25T01:24:43.828470660Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 25 01:24:43.828526 containerd[1734]: time="2026-04-25T01:24:43.828486580Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 25 01:24:43.828526 containerd[1734]: time="2026-04-25T01:24:43.828496740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.828711 containerd[1734]: time="2026-04-25T01:24:43.828517700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 25 01:24:43.828711 containerd[1734]: time="2026-04-25T01:24:43.828649500Z" level=info msg="NRI interface is disabled by configuration."
Apr 25 01:24:43.828711 containerd[1734]: time="2026-04-25T01:24:43.828664660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 25 01:24:43.829118 containerd[1734]: time="2026-04-25T01:24:43.829057020Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 25 01:24:43.829410 containerd[1734]: time="2026-04-25T01:24:43.829279980Z" level=info msg="Connect containerd service"
Apr 25 01:24:43.829410 containerd[1734]: time="2026-04-25T01:24:43.829320660Z" level=info msg="using legacy CRI server"
Apr 25 01:24:43.829410 containerd[1734]: time="2026-04-25T01:24:43.829340220Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 25 01:24:43.829612 containerd[1734]: time="2026-04-25T01:24:43.829589980Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 25 01:24:43.830373 containerd[1734]: time="2026-04-25T01:24:43.830293700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830495700Z" level=info msg="Start subscribing containerd event"
Apr 25
01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830570900Z" level=info msg="Start recovering state" Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830609100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830650340Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830656460Z" level=info msg="Start event monitor" Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830670860Z" level=info msg="Start snapshots syncer" Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830680980Z" level=info msg="Start cni network conf syncer for default" Apr 25 01:24:43.835678 containerd[1734]: time="2026-04-25T01:24:43.830689100Z" level=info msg="Start streaming server" Apr 25 01:24:43.830853 systemd[1]: Started containerd.service - containerd container runtime. Apr 25 01:24:43.836636 containerd[1734]: time="2026-04-25T01:24:43.836529100Z" level=info msg="containerd successfully booted in 0.075391s" Apr 25 01:24:43.838721 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 25 01:24:43.844571 systemd[1]: Startup finished in 647ms (kernel) + 13.770s (initrd) + 13.743s (userspace) = 28.161s. Apr 25 01:24:44.192556 kubelet[1852]: E0425 01:24:44.192511 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 01:24:44.196134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 01:24:44.196288 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 25 01:24:44.214086 login[1843]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 25 01:24:44.217857 login[1846]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Apr 25 01:24:44.225377 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 25 01:24:44.232941 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 25 01:24:44.235233 systemd-logind[1716]: New session 1 of user core. Apr 25 01:24:44.239693 systemd-logind[1716]: New session 2 of user core. Apr 25 01:24:44.262244 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 25 01:24:44.271718 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 25 01:24:44.299628 (systemd)[1871]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 25 01:24:44.449183 systemd[1871]: Queued start job for default target default.target. Apr 25 01:24:44.454823 systemd[1871]: Created slice app.slice - User Application Slice. Apr 25 01:24:44.454854 systemd[1871]: Reached target paths.target - Paths. Apr 25 01:24:44.454867 systemd[1871]: Reached target timers.target - Timers. Apr 25 01:24:44.456099 systemd[1871]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 25 01:24:44.465614 systemd[1871]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 25 01:24:44.465669 systemd[1871]: Reached target sockets.target - Sockets. Apr 25 01:24:44.465680 systemd[1871]: Reached target basic.target - Basic System. Apr 25 01:24:44.465718 systemd[1871]: Reached target default.target - Main User Target. Apr 25 01:24:44.465743 systemd[1871]: Startup finished in 158ms. Apr 25 01:24:44.465825 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 25 01:24:44.473787 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 25 01:24:44.476063 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 25 01:24:45.620798 waagent[1848]: 2026-04-25T01:24:45.620707Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Apr 25 01:24:45.625176 waagent[1848]: 2026-04-25T01:24:45.625123Z INFO Daemon Daemon OS: flatcar 4081.3.6 Apr 25 01:24:45.628636 waagent[1848]: 2026-04-25T01:24:45.628596Z INFO Daemon Daemon Python: 3.11.9 Apr 25 01:24:45.632009 waagent[1848]: 2026-04-25T01:24:45.631952Z INFO Daemon Daemon Run daemon Apr 25 01:24:45.635251 waagent[1848]: 2026-04-25T01:24:45.635216Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Apr 25 01:24:45.642388 waagent[1848]: 2026-04-25T01:24:45.642190Z INFO Daemon Daemon Using waagent for provisioning Apr 25 01:24:45.646296 waagent[1848]: 2026-04-25T01:24:45.646257Z INFO Daemon Daemon Activate resource disk Apr 25 01:24:45.649787 waagent[1848]: 2026-04-25T01:24:45.649749Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Apr 25 01:24:45.659194 waagent[1848]: 2026-04-25T01:24:45.659144Z INFO Daemon Daemon Found device: None Apr 25 01:24:45.662788 waagent[1848]: 2026-04-25T01:24:45.662746Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Apr 25 01:24:45.669230 waagent[1848]: 2026-04-25T01:24:45.669192Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Apr 25 01:24:45.679361 waagent[1848]: 2026-04-25T01:24:45.679315Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 25 01:24:45.684583 waagent[1848]: 2026-04-25T01:24:45.684544Z INFO Daemon Daemon Running default provisioning handler Apr 25 01:24:45.695543 waagent[1848]: 2026-04-25T01:24:45.695479Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Apr 25 01:24:45.706130 waagent[1848]: 2026-04-25T01:24:45.706076Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Apr 25 01:24:45.713526 waagent[1848]: 2026-04-25T01:24:45.713482Z INFO Daemon Daemon cloud-init is enabled: False Apr 25 01:24:45.717325 waagent[1848]: 2026-04-25T01:24:45.717291Z INFO Daemon Daemon Copying ovf-env.xml Apr 25 01:24:45.883662 waagent[1848]: 2026-04-25T01:24:45.883519Z INFO Daemon Daemon Successfully mounted dvd Apr 25 01:24:45.897773 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Apr 25 01:24:45.898152 waagent[1848]: 2026-04-25T01:24:45.897897Z INFO Daemon Daemon Detect protocol endpoint Apr 25 01:24:45.901745 waagent[1848]: 2026-04-25T01:24:45.901689Z INFO Daemon Daemon Clean protocol and wireserver endpoint Apr 25 01:24:45.906301 waagent[1848]: 2026-04-25T01:24:45.906257Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Apr 25 01:24:45.911258 waagent[1848]: 2026-04-25T01:24:45.911217Z INFO Daemon Daemon Test for route to 168.63.129.16 Apr 25 01:24:45.915638 waagent[1848]: 2026-04-25T01:24:45.915596Z INFO Daemon Daemon Route to 168.63.129.16 exists Apr 25 01:24:45.919707 waagent[1848]: 2026-04-25T01:24:45.919667Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Apr 25 01:24:45.973315 waagent[1848]: 2026-04-25T01:24:45.973274Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Apr 25 01:24:45.978656 waagent[1848]: 2026-04-25T01:24:45.978630Z INFO Daemon Daemon Wire protocol version:2012-11-30 Apr 25 01:24:45.982923 waagent[1848]: 2026-04-25T01:24:45.982888Z INFO Daemon Daemon Server preferred version:2015-04-05 Apr 25 01:24:46.319527 waagent[1848]: 2026-04-25T01:24:46.318911Z INFO Daemon Daemon Initializing goal state during protocol detection Apr 25 01:24:46.324006 waagent[1848]: 2026-04-25T01:24:46.323951Z INFO Daemon Daemon Forcing an update of the goal state. 
Apr 25 01:24:46.331180 waagent[1848]: 2026-04-25T01:24:46.331134Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 25 01:24:46.350222 waagent[1848]: 2026-04-25T01:24:46.350182Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Apr 25 01:24:46.354782 waagent[1848]: 2026-04-25T01:24:46.354736Z INFO Daemon Apr 25 01:24:46.357200 waagent[1848]: 2026-04-25T01:24:46.357162Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 61c42bdf-8df3-4daa-9621-674877d36054 eTag: 17050695902269664167 source: Fabric] Apr 25 01:24:46.366842 waagent[1848]: 2026-04-25T01:24:46.366798Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Apr 25 01:24:46.372473 waagent[1848]: 2026-04-25T01:24:46.372414Z INFO Daemon Apr 25 01:24:46.374597 waagent[1848]: 2026-04-25T01:24:46.374557Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Apr 25 01:24:46.382989 waagent[1848]: 2026-04-25T01:24:46.382954Z INFO Daemon Daemon Downloading artifacts profile blob Apr 25 01:24:46.457538 waagent[1848]: 2026-04-25T01:24:46.457453Z INFO Daemon Downloaded certificate {'thumbprint': '5853C71B32D1B97942AA0FA8706102EA805AE538', 'hasPrivateKey': True} Apr 25 01:24:46.465331 waagent[1848]: 2026-04-25T01:24:46.465284Z INFO Daemon Fetch goal state completed Apr 25 01:24:46.474925 waagent[1848]: 2026-04-25T01:24:46.474885Z INFO Daemon Daemon Starting provisioning Apr 25 01:24:46.478839 waagent[1848]: 2026-04-25T01:24:46.478798Z INFO Daemon Daemon Handle ovf-env.xml. 
Apr 25 01:24:46.482296 waagent[1848]: 2026-04-25T01:24:46.482266Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-cf3dcbc0ec] Apr 25 01:24:46.493256 waagent[1848]: 2026-04-25T01:24:46.488392Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-cf3dcbc0ec] Apr 25 01:24:46.493771 waagent[1848]: 2026-04-25T01:24:46.493722Z INFO Daemon Daemon Examine /proc/net/route for primary interface Apr 25 01:24:46.498660 waagent[1848]: 2026-04-25T01:24:46.498608Z INFO Daemon Daemon Primary interface is [eth0] Apr 25 01:24:46.528774 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 25 01:24:46.529019 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 25 01:24:46.529128 systemd-networkd[1367]: eth0: DHCP lease lost Apr 25 01:24:46.530692 waagent[1848]: 2026-04-25T01:24:46.530601Z INFO Daemon Daemon Create user account if not exists Apr 25 01:24:46.535191 waagent[1848]: 2026-04-25T01:24:46.535138Z INFO Daemon Daemon User core already exists, skip useradd Apr 25 01:24:46.535528 systemd-networkd[1367]: eth0: DHCPv6 lease lost Apr 25 01:24:46.540078 waagent[1848]: 2026-04-25T01:24:46.540003Z INFO Daemon Daemon Configure sudoer Apr 25 01:24:46.544060 waagent[1848]: 2026-04-25T01:24:46.544010Z INFO Daemon Daemon Configure sshd Apr 25 01:24:46.547672 waagent[1848]: 2026-04-25T01:24:46.547627Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Apr 25 01:24:46.560721 waagent[1848]: 2026-04-25T01:24:46.557276Z INFO Daemon Daemon Deploy ssh public key. 
Apr 25 01:24:46.569472 systemd-networkd[1367]: eth0: DHCPv4 address 10.0.0.7/24, gateway 10.0.0.1 acquired from 168.63.129.16 Apr 25 01:24:47.679362 waagent[1848]: 2026-04-25T01:24:47.679311Z INFO Daemon Daemon Provisioning complete Apr 25 01:24:47.693811 waagent[1848]: 2026-04-25T01:24:47.693767Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Apr 25 01:24:47.698525 waagent[1848]: 2026-04-25T01:24:47.698485Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Apr 25 01:24:47.706066 waagent[1848]: 2026-04-25T01:24:47.706027Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Apr 25 01:24:47.838319 waagent[1922]: 2026-04-25T01:24:47.837686Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Apr 25 01:24:47.838319 waagent[1922]: 2026-04-25T01:24:47.837836Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Apr 25 01:24:47.838319 waagent[1922]: 2026-04-25T01:24:47.837889Z INFO ExtHandler ExtHandler Python: 3.11.9 Apr 25 01:24:47.879915 waagent[1922]: 2026-04-25T01:24:47.879839Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Apr 25 01:24:47.880235 waagent[1922]: 2026-04-25T01:24:47.880199Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 25 01:24:47.880393 waagent[1922]: 2026-04-25T01:24:47.880359Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 25 01:24:47.888067 waagent[1922]: 2026-04-25T01:24:47.887992Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Apr 25 01:24:47.893606 waagent[1922]: 2026-04-25T01:24:47.893540Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Apr 25 01:24:47.894277 waagent[1922]: 2026-04-25T01:24:47.894237Z INFO ExtHandler Apr 25 01:24:47.894429 waagent[1922]: 
2026-04-25T01:24:47.894396Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2281323e-7a93-4b03-9597-d68795612d9a eTag: 17050695902269664167 source: Fabric] Apr 25 01:24:47.896109 waagent[1922]: 2026-04-25T01:24:47.894804Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Apr 25 01:24:47.896109 waagent[1922]: 2026-04-25T01:24:47.895372Z INFO ExtHandler Apr 25 01:24:47.896109 waagent[1922]: 2026-04-25T01:24:47.895466Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Apr 25 01:24:47.899457 waagent[1922]: 2026-04-25T01:24:47.898686Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Apr 25 01:24:48.007274 waagent[1922]: 2026-04-25T01:24:48.007148Z INFO ExtHandler Downloaded certificate {'thumbprint': '5853C71B32D1B97942AA0FA8706102EA805AE538', 'hasPrivateKey': True} Apr 25 01:24:48.008010 waagent[1922]: 2026-04-25T01:24:48.007965Z INFO ExtHandler Fetch goal state completed Apr 25 01:24:48.021765 waagent[1922]: 2026-04-25T01:24:48.021712Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1922 Apr 25 01:24:48.022046 waagent[1922]: 2026-04-25T01:24:48.022009Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Apr 25 01:24:48.023841 waagent[1922]: 2026-04-25T01:24:48.023792Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Apr 25 01:24:48.024322 waagent[1922]: 2026-04-25T01:24:48.024283Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Apr 25 01:24:48.809015 waagent[1922]: 2026-04-25T01:24:48.808588Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Apr 25 01:24:48.809015 waagent[1922]: 2026-04-25T01:24:48.808791Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Apr 25 
01:24:48.814892 waagent[1922]: 2026-04-25T01:24:48.814856Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Apr 25 01:24:48.821812 systemd[1]: Reloading requested from client PID 1935 ('systemctl') (unit waagent.service)... Apr 25 01:24:48.821828 systemd[1]: Reloading... Apr 25 01:24:48.897537 zram_generator::config[1969]: No configuration found. Apr 25 01:24:49.011370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 01:24:49.102489 systemd[1]: Reloading finished in 280 ms. Apr 25 01:24:49.129458 waagent[1922]: 2026-04-25T01:24:49.126953Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Apr 25 01:24:49.132604 systemd[1]: Reloading requested from client PID 2023 ('systemctl') (unit waagent.service)... Apr 25 01:24:49.132618 systemd[1]: Reloading... Apr 25 01:24:49.214467 zram_generator::config[2061]: No configuration found. Apr 25 01:24:49.318526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 01:24:49.409518 systemd[1]: Reloading finished in 276 ms. Apr 25 01:24:49.442468 waagent[1922]: 2026-04-25T01:24:49.442094Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Apr 25 01:24:49.442468 waagent[1922]: 2026-04-25T01:24:49.442250Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Apr 25 01:24:50.010469 waagent[1922]: 2026-04-25T01:24:50.009939Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Apr 25 01:24:50.010632 waagent[1922]: 2026-04-25T01:24:50.010565Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Apr 25 01:24:50.011459 waagent[1922]: 2026-04-25T01:24:50.011326Z INFO ExtHandler ExtHandler Starting env monitor service. Apr 25 01:24:50.011907 waagent[1922]: 2026-04-25T01:24:50.011851Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Apr 25 01:24:50.012111 waagent[1922]: 2026-04-25T01:24:50.012064Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 25 01:24:50.012111 waagent[1922]: 2026-04-25T01:24:50.012154Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Apr 25 01:24:50.013036 waagent[1922]: 2026-04-25T01:24:50.012359Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 25 01:24:50.013036 waagent[1922]: 2026-04-25T01:24:50.012598Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Apr 25 01:24:50.013036 waagent[1922]: 2026-04-25T01:24:50.012783Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Apr 25 01:24:50.013036 waagent[1922]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Apr 25 01:24:50.013036 waagent[1922]: eth0 00000000 0100000A 0003 0 0 1024 00000000 0 0 0 Apr 25 01:24:50.013036 waagent[1922]: eth0 0000000A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Apr 25 01:24:50.013036 waagent[1922]: eth0 0100000A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Apr 25 01:24:50.013036 waagent[1922]: eth0 10813FA8 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 25 01:24:50.013036 waagent[1922]: eth0 FEA9FEA9 0100000A 0007 0 0 1024 FFFFFFFF 0 0 0 Apr 25 01:24:50.013372 waagent[1922]: 2026-04-25T01:24:50.013319Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Apr 25 01:24:50.013564 waagent[1922]: 2026-04-25T01:24:50.013514Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Apr 25 01:24:50.013616 waagent[1922]: 2026-04-25T01:24:50.013572Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Apr 25 01:24:50.014108 waagent[1922]: 2026-04-25T01:24:50.014042Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Apr 25 01:24:50.014165 waagent[1922]: 2026-04-25T01:24:50.014106Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Apr 25 01:24:50.014517 waagent[1922]: 2026-04-25T01:24:50.014396Z INFO EnvHandler ExtHandler Configure routes Apr 25 01:24:50.014918 waagent[1922]: 2026-04-25T01:24:50.014876Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Apr 25 01:24:50.015030 waagent[1922]: 2026-04-25T01:24:50.015000Z INFO EnvHandler ExtHandler Gateway:None Apr 25 01:24:50.015151 waagent[1922]: 2026-04-25T01:24:50.015122Z INFO EnvHandler ExtHandler Routes:None Apr 25 01:24:50.019403 waagent[1922]: 2026-04-25T01:24:50.019355Z INFO ExtHandler ExtHandler Apr 25 01:24:50.019988 waagent[1922]: 2026-04-25T01:24:50.019778Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bf57cfda-f6e0-471e-a341-b2bdca600073 correlation aa90421b-701c-413d-af6b-d67269da13e5 created: 2026-04-25T01:23:42.612518Z] Apr 25 01:24:50.020789 waagent[1922]: 2026-04-25T01:24:50.020735Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Apr 25 01:24:50.022818 waagent[1922]: 2026-04-25T01:24:50.022775Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Apr 25 01:24:50.057120 waagent[1922]: 2026-04-25T01:24:50.057060Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6296CA70-78A6-402A-89AC-0392D061F518;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Apr 25 01:24:50.066832 waagent[1922]: 2026-04-25T01:24:50.066377Z INFO MonitorHandler ExtHandler Network interfaces: Apr 25 01:24:50.066832 waagent[1922]: Executing ['ip', '-a', '-o', 'link']: Apr 25 01:24:50.066832 waagent[1922]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Apr 25 01:24:50.066832 waagent[1922]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b6:8b:c0 brd ff:ff:ff:ff:ff:ff Apr 25 01:24:50.066832 waagent[1922]: 3: enP20948s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:b6:8b:c0 brd ff:ff:ff:ff:ff:ff\ altname enP20948p0s2 Apr 25 01:24:50.066832 waagent[1922]: Executing ['ip', '-4', '-a', '-o', 'address']: Apr 25 01:24:50.066832 waagent[1922]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Apr 25 01:24:50.066832 waagent[1922]: 2: eth0 inet 10.0.0.7/24 metric 1024 brd 10.0.0.255 scope global eth0\ valid_lft forever preferred_lft forever Apr 25 01:24:50.066832 waagent[1922]: Executing ['ip', '-6', '-a', '-o', 'address']: Apr 25 01:24:50.066832 waagent[1922]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Apr 25 01:24:50.066832 waagent[1922]: 2: eth0 inet6 fe80::7eed:8dff:feb6:8bc0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Apr 25 01:24:50.139583 waagent[1922]: 2026-04-25T01:24:50.138607Z INFO EnvHandler ExtHandler Successfully 
added Azure fabric firewall rules. Current Firewall rules: Apr 25 01:24:50.139583 waagent[1922]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 25 01:24:50.139583 waagent[1922]: pkts bytes target prot opt in out source destination Apr 25 01:24:50.139583 waagent[1922]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 25 01:24:50.139583 waagent[1922]: pkts bytes target prot opt in out source destination Apr 25 01:24:50.139583 waagent[1922]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 25 01:24:50.139583 waagent[1922]: pkts bytes target prot opt in out source destination Apr 25 01:24:50.139583 waagent[1922]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 25 01:24:50.139583 waagent[1922]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 25 01:24:50.139583 waagent[1922]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 25 01:24:50.141758 waagent[1922]: 2026-04-25T01:24:50.141700Z INFO EnvHandler ExtHandler Current Firewall rules: Apr 25 01:24:50.141758 waagent[1922]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Apr 25 01:24:50.141758 waagent[1922]: pkts bytes target prot opt in out source destination Apr 25 01:24:50.141758 waagent[1922]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Apr 25 01:24:50.141758 waagent[1922]: pkts bytes target prot opt in out source destination Apr 25 01:24:50.141758 waagent[1922]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Apr 25 01:24:50.141758 waagent[1922]: pkts bytes target prot opt in out source destination Apr 25 01:24:50.141758 waagent[1922]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Apr 25 01:24:50.141758 waagent[1922]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Apr 25 01:24:50.141758 waagent[1922]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Apr 25 01:24:50.142002 waagent[1922]: 2026-04-25T01:24:50.141968Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Apr 25 01:24:54.446889 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 25 01:24:54.456626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 01:24:54.563947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 01:24:54.572684 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 01:24:54.657868 kubelet[2150]: E0425 01:24:54.657802 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 01:24:54.660728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 01:24:54.660859 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 01:25:01.228988 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 25 01:25:01.241018 systemd[1]: Started sshd@0-10.0.0.7:22-4.175.71.9:36298.service - OpenSSH per-connection server daemon (4.175.71.9:36298). Apr 25 01:25:02.267256 sshd[2158]: Accepted publickey for core from 4.175.71.9 port 36298 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ Apr 25 01:25:02.268620 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 01:25:02.272201 systemd-logind[1716]: New session 3 of user core. Apr 25 01:25:02.282787 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 25 01:25:03.060685 systemd[1]: Started sshd@1-10.0.0.7:22-4.175.71.9:36310.service - OpenSSH per-connection server daemon (4.175.71.9:36310). 
Apr 25 01:25:03.977478 sshd[2163]: Accepted publickey for core from 4.175.71.9 port 36310 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ Apr 25 01:25:03.978241 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 01:25:03.981799 systemd-logind[1716]: New session 4 of user core. Apr 25 01:25:03.989596 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 25 01:25:04.617601 sshd[2163]: pam_unix(sshd:session): session closed for user core Apr 25 01:25:04.620810 systemd[1]: sshd@1-10.0.0.7:22-4.175.71.9:36310.service: Deactivated successfully. Apr 25 01:25:04.622809 systemd[1]: session-4.scope: Deactivated successfully. Apr 25 01:25:04.624496 systemd-logind[1716]: Session 4 logged out. Waiting for processes to exit. Apr 25 01:25:04.626025 systemd-logind[1716]: Removed session 4. Apr 25 01:25:04.773507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 25 01:25:04.775175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 01:25:04.776865 systemd[1]: Started sshd@2-10.0.0.7:22-4.175.71.9:36320.service - OpenSSH per-connection server daemon (4.175.71.9:36320). Apr 25 01:25:04.894346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 25 01:25:04.907693 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 25 01:25:05.001252 kubelet[2180]: E0425 01:25:05.001197 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 25 01:25:05.003874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 25 01:25:05.004027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 25 01:25:05.694203 sshd[2171]: Accepted publickey for core from 4.175.71.9 port 36320 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ Apr 25 01:25:05.695613 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 01:25:05.699349 systemd-logind[1716]: New session 5 of user core. Apr 25 01:25:05.709846 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 25 01:25:06.325543 sshd[2171]: pam_unix(sshd:session): session closed for user core Apr 25 01:25:06.328886 systemd-logind[1716]: Session 5 logged out. Waiting for processes to exit. Apr 25 01:25:06.329126 systemd[1]: sshd@2-10.0.0.7:22-4.175.71.9:36320.service: Deactivated successfully. Apr 25 01:25:06.330738 systemd[1]: session-5.scope: Deactivated successfully. Apr 25 01:25:06.332380 systemd-logind[1716]: Removed session 5. Apr 25 01:25:06.381557 chronyd[1699]: Selected source PHC0 Apr 25 01:25:06.478797 systemd[1]: Started sshd@3-10.0.0.7:22-4.175.71.9:39502.service - OpenSSH per-connection server daemon (4.175.71.9:39502). 
Apr 25 01:25:07.384531 sshd[2191]: Accepted publickey for core from 4.175.71.9 port 39502 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:25:07.385880 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:25:07.389513 systemd-logind[1716]: New session 6 of user core.
Apr 25 01:25:07.395737 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 25 01:25:08.011536 sshd[2191]: pam_unix(sshd:session): session closed for user core
Apr 25 01:25:08.015590 systemd[1]: sshd@3-10.0.0.7:22-4.175.71.9:39502.service: Deactivated successfully.
Apr 25 01:25:08.017251 systemd[1]: session-6.scope: Deactivated successfully.
Apr 25 01:25:08.020084 systemd-logind[1716]: Session 6 logged out. Waiting for processes to exit.
Apr 25 01:25:08.020976 systemd-logind[1716]: Removed session 6.
Apr 25 01:25:08.170181 systemd[1]: Started sshd@4-10.0.0.7:22-4.175.71.9:39514.service - OpenSSH per-connection server daemon (4.175.71.9:39514).
Apr 25 01:25:09.085918 sshd[2198]: Accepted publickey for core from 4.175.71.9 port 39514 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:25:09.087270 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:25:09.090874 systemd-logind[1716]: New session 7 of user core.
Apr 25 01:25:09.101727 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 25 01:25:09.727515 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 25 01:25:09.727797 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 25 01:25:09.794351 sudo[2201]: pam_unix(sudo:session): session closed for user root
Apr 25 01:25:09.942554 sshd[2198]: pam_unix(sshd:session): session closed for user core
Apr 25 01:25:09.946164 systemd[1]: sshd@4-10.0.0.7:22-4.175.71.9:39514.service: Deactivated successfully.
Apr 25 01:25:09.947665 systemd[1]: session-7.scope: Deactivated successfully.
Apr 25 01:25:09.948280 systemd-logind[1716]: Session 7 logged out. Waiting for processes to exit.
Apr 25 01:25:09.949239 systemd-logind[1716]: Removed session 7.
Apr 25 01:25:10.095203 systemd[1]: Started sshd@5-10.0.0.7:22-4.175.71.9:39526.service - OpenSSH per-connection server daemon (4.175.71.9:39526).
Apr 25 01:25:10.998465 sshd[2206]: Accepted publickey for core from 4.175.71.9 port 39526 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:25:10.999380 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:25:11.003098 systemd-logind[1716]: New session 8 of user core.
Apr 25 01:25:11.010623 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 25 01:25:11.477734 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 25 01:25:11.478337 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 25 01:25:11.481386 sudo[2210]: pam_unix(sudo:session): session closed for user root
Apr 25 01:25:11.485873 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 25 01:25:11.486135 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 25 01:25:11.498676 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 25 01:25:11.500331 auditctl[2213]: No rules
Apr 25 01:25:11.501284 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 25 01:25:11.501805 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 25 01:25:11.504409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 25 01:25:11.535192 augenrules[2231]: No rules
Apr 25 01:25:11.536619 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 25 01:25:11.538054 sudo[2209]: pam_unix(sudo:session): session closed for user root
Apr 25 01:25:11.683886 sshd[2206]: pam_unix(sshd:session): session closed for user core
Apr 25 01:25:11.687668 systemd[1]: sshd@5-10.0.0.7:22-4.175.71.9:39526.service: Deactivated successfully.
Apr 25 01:25:11.689262 systemd[1]: session-8.scope: Deactivated successfully.
Apr 25 01:25:11.689929 systemd-logind[1716]: Session 8 logged out. Waiting for processes to exit.
Apr 25 01:25:11.690806 systemd-logind[1716]: Removed session 8.
Apr 25 01:25:11.847875 systemd[1]: Started sshd@6-10.0.0.7:22-4.175.71.9:39534.service - OpenSSH per-connection server daemon (4.175.71.9:39534).
Apr 25 01:25:12.762464 sshd[2239]: Accepted publickey for core from 4.175.71.9 port 39534 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:25:12.763297 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:25:12.766803 systemd-logind[1716]: New session 9 of user core.
Apr 25 01:25:12.774626 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 25 01:25:13.250029 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 25 01:25:13.250316 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 25 01:25:14.747820 (dockerd)[2257]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 25 01:25:14.747901 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 25 01:25:15.018789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 25 01:25:15.031641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 01:25:15.571643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 01:25:15.574809 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 25 01:25:15.608058 kubelet[2266]: E0425 01:25:15.608008 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 25 01:25:15.610934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 25 01:25:15.611183 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 25 01:25:15.908866 dockerd[2257]: time="2026-04-25T01:25:15.907732852Z" level=info msg="Starting up"
Apr 25 01:25:16.320023 dockerd[2257]: time="2026-04-25T01:25:16.319795028Z" level=info msg="Loading containers: start."
Apr 25 01:25:16.485454 kernel: Initializing XFRM netlink socket
Apr 25 01:25:16.620074 systemd-networkd[1367]: docker0: Link UP
Apr 25 01:25:16.645487 dockerd[2257]: time="2026-04-25T01:25:16.645443904Z" level=info msg="Loading containers: done."
Apr 25 01:25:16.656346 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3091724894-merged.mount: Deactivated successfully.
Apr 25 01:25:16.667522 dockerd[2257]: time="2026-04-25T01:25:16.667486269Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 25 01:25:16.667633 dockerd[2257]: time="2026-04-25T01:25:16.667597389Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 25 01:25:16.667753 dockerd[2257]: time="2026-04-25T01:25:16.667735749Z" level=info msg="Daemon has completed initialization"
Apr 25 01:25:16.729455 dockerd[2257]: time="2026-04-25T01:25:16.729189723Z" level=info msg="API listen on /run/docker.sock"
Apr 25 01:25:16.729387 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 25 01:25:17.271639 containerd[1734]: time="2026-04-25T01:25:17.271606169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 25 01:25:18.227032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475479323.mount: Deactivated successfully.
Apr 25 01:25:19.771678 containerd[1734]: time="2026-04-25T01:25:19.771621653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:19.774824 containerd[1734]: time="2026-04-25T01:25:19.774795892Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608785"
Apr 25 01:25:19.778210 containerd[1734]: time="2026-04-25T01:25:19.778181451Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:19.783078 containerd[1734]: time="2026-04-25T01:25:19.783008689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:19.784421 containerd[1734]: time="2026-04-25T01:25:19.784033248Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 2.512392319s"
Apr 25 01:25:19.784421 containerd[1734]: time="2026-04-25T01:25:19.784070648Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\""
Apr 25 01:25:19.785333 containerd[1734]: time="2026-04-25T01:25:19.785176608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 25 01:25:21.443473 containerd[1734]: time="2026-04-25T01:25:21.442462004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:21.445835 containerd[1734]: time="2026-04-25T01:25:21.445619843Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073294"
Apr 25 01:25:21.449326 containerd[1734]: time="2026-04-25T01:25:21.448880802Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:21.454284 containerd[1734]: time="2026-04-25T01:25:21.454253680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:21.455306 containerd[1734]: time="2026-04-25T01:25:21.455274679Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 1.670066711s"
Apr 25 01:25:21.455365 containerd[1734]: time="2026-04-25T01:25:21.455306359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\""
Apr 25 01:25:21.456829 containerd[1734]: time="2026-04-25T01:25:21.456808879Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 25 01:25:22.710827 containerd[1734]: time="2026-04-25T01:25:22.710772834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:22.716025 containerd[1734]: time="2026-04-25T01:25:22.715991434Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800836"
Apr 25 01:25:22.720455 containerd[1734]: time="2026-04-25T01:25:22.719486954Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:22.724456 containerd[1734]: time="2026-04-25T01:25:22.724405794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:22.726594 containerd[1734]: time="2026-04-25T01:25:22.726432434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 1.269513035s"
Apr 25 01:25:22.726594 containerd[1734]: time="2026-04-25T01:25:22.726480954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\""
Apr 25 01:25:22.727156 containerd[1734]: time="2026-04-25T01:25:22.726995874Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 25 01:25:23.994709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564549114.mount: Deactivated successfully.
Apr 25 01:25:24.241477 containerd[1734]: time="2026-04-25T01:25:24.241219366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:24.244327 containerd[1734]: time="2026-04-25T01:25:24.244180206Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340584"
Apr 25 01:25:24.248109 containerd[1734]: time="2026-04-25T01:25:24.247550486Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:24.251581 containerd[1734]: time="2026-04-25T01:25:24.251520326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:24.252477 containerd[1734]: time="2026-04-25T01:25:24.252320206Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.525292012s"
Apr 25 01:25:24.252477 containerd[1734]: time="2026-04-25T01:25:24.252351326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\""
Apr 25 01:25:24.252888 containerd[1734]: time="2026-04-25T01:25:24.252868086Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 25 01:25:24.440461 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Apr 25 01:25:24.904140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620955500.mount: Deactivated successfully.
Apr 25 01:25:25.768834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 25 01:25:25.777813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 01:25:25.882141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 01:25:25.895742 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 25 01:25:25.926453 kubelet[2542]: E0425 01:25:25.926378 2542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 25 01:25:25.929114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 25 01:25:25.929372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 25 01:25:26.689471 containerd[1734]: time="2026-04-25T01:25:26.689365986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:26.692717 containerd[1734]: time="2026-04-25T01:25:26.692689786Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Apr 25 01:25:26.696268 containerd[1734]: time="2026-04-25T01:25:26.695770386Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:26.701929 containerd[1734]: time="2026-04-25T01:25:26.701895106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:26.703267 containerd[1734]: time="2026-04-25T01:25:26.703229866Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.45026262s"
Apr 25 01:25:26.703374 containerd[1734]: time="2026-04-25T01:25:26.703358866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 25 01:25:26.704162 containerd[1734]: time="2026-04-25T01:25:26.704137186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 25 01:25:27.315588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223273474.mount: Deactivated successfully.
Apr 25 01:25:27.338781 containerd[1734]: time="2026-04-25T01:25:27.338731471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:27.342737 containerd[1734]: time="2026-04-25T01:25:27.342570271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Apr 25 01:25:27.346459 containerd[1734]: time="2026-04-25T01:25:27.346197071Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:27.350995 containerd[1734]: time="2026-04-25T01:25:27.350949871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:27.351901 containerd[1734]: time="2026-04-25T01:25:27.351647151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 647.477765ms"
Apr 25 01:25:27.351901 containerd[1734]: time="2026-04-25T01:25:27.351678751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 25 01:25:27.352353 containerd[1734]: time="2026-04-25T01:25:27.352313511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 25 01:25:27.695289 update_engine[1719]: I20260425 01:25:27.695159 1719 update_attempter.cc:509] Updating boot flags...
Apr 25 01:25:27.755953 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2569)
Apr 25 01:25:28.540604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005901191.mount: Deactivated successfully.
Apr 25 01:25:29.739782 containerd[1734]: time="2026-04-25T01:25:29.739725808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:29.743854 containerd[1734]: time="2026-04-25T01:25:29.743823287Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752308"
Apr 25 01:25:29.747721 containerd[1734]: time="2026-04-25T01:25:29.747683566Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:29.753163 containerd[1734]: time="2026-04-25T01:25:29.753136004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 01:25:29.754914 containerd[1734]: time="2026-04-25T01:25:29.753848964Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 2.400973733s"
Apr 25 01:25:29.754914 containerd[1734]: time="2026-04-25T01:25:29.753879884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 25 01:25:32.873308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 01:25:32.882695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 01:25:32.918318 systemd[1]: Reloading requested from client PID 2682 ('systemctl') (unit session-9.scope)...
Apr 25 01:25:32.918480 systemd[1]: Reloading...
Apr 25 01:25:33.012331 zram_generator::config[2722]: No configuration found.
Apr 25 01:25:33.132406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 25 01:25:33.226379 systemd[1]: Reloading finished in 307 ms.
Apr 25 01:25:33.262039 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 25 01:25:33.262114 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 25 01:25:33.263477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 01:25:33.272842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 25 01:25:33.419582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 25 01:25:33.429767 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 25 01:25:33.461129 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 25 01:25:33.941816 kubelet[2788]: I0425 01:25:33.941759 2788 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 25 01:25:33.941816 kubelet[2788]: I0425 01:25:33.941809 2788 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 25 01:25:33.943018 kubelet[2788]: I0425 01:25:33.943003 2788 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 25 01:25:33.943056 kubelet[2788]: I0425 01:25:33.943018 2788 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 25 01:25:33.945176 kubelet[2788]: I0425 01:25:33.943746 2788 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 25 01:25:33.954029 kubelet[2788]: E0425 01:25:33.953995 2788 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 25 01:25:33.954936 kubelet[2788]: I0425 01:25:33.954918 2788 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 25 01:25:33.957942 kubelet[2788]: E0425 01:25:33.957915 2788 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 25 01:25:33.958074 kubelet[2788]: I0425 01:25:33.958062 2788 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 25 01:25:33.960614 kubelet[2788]: I0425 01:25:33.960597 2788 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 25 01:25:33.961481 kubelet[2788]: I0425 01:25:33.961455 2788 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 25 01:25:33.961721 kubelet[2788]: I0425 01:25:33.961563 2788 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-cf3dcbc0ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 25 01:25:33.961849 kubelet[2788]: I0425 01:25:33.961838 2788 topology_manager.go:143] "Creating topology manager with none policy"
Apr 25 01:25:33.961905 kubelet[2788]: I0425 01:25:33.961898 2788 container_manager_linux.go:308] "Creating device plugin manager"
Apr 25 01:25:33.962045 kubelet[2788]: I0425 01:25:33.962035 2788 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 25 01:25:33.967927 kubelet[2788]: I0425 01:25:33.967909 2788 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 25 01:25:33.968183 kubelet[2788]: I0425 01:25:33.968169 2788 kubelet.go:482] "Attempting to sync node with API server"
Apr 25 01:25:33.968254 kubelet[2788]: I0425 01:25:33.968245 2788 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 25 01:25:33.968313 kubelet[2788]: I0425 01:25:33.968306 2788 kubelet.go:394] "Adding apiserver pod source"
Apr 25 01:25:33.968378 kubelet[2788]: I0425 01:25:33.968369 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 25 01:25:33.971356 kubelet[2788]: I0425 01:25:33.971339 2788 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 25 01:25:33.972384 kubelet[2788]: I0425 01:25:33.972366 2788 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 25 01:25:33.972501 kubelet[2788]: I0425 01:25:33.972491 2788 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 25 01:25:33.972589 kubelet[2788]: W0425 01:25:33.972580 2788 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 25 01:25:33.974984 kubelet[2788]: I0425 01:25:33.974969 2788 server.go:1257] "Started kubelet"
Apr 25 01:25:33.976883 kubelet[2788]: I0425 01:25:33.976865 2788 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 25 01:25:33.979269 kubelet[2788]: E0425 01:25:33.978328 2788 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-cf3dcbc0ec.18a97525fbe8aa1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-cf3dcbc0ec,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-cf3dcbc0ec,},FirstTimestamp:2026-04-25 01:25:33.97494019 +0000 UTC m=+0.542090015,LastTimestamp:2026-04-25 01:25:33.97494019 +0000 UTC m=+0.542090015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-cf3dcbc0ec,}"
Apr 25 01:25:33.979986 kubelet[2788]: I0425 01:25:33.979937 2788 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 25 01:25:33.981042 kubelet[2788]: I0425 01:25:33.981018 2788 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 25 01:25:33.982976 kubelet[2788]: I0425 01:25:33.982959 2788 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 25 01:25:33.984473 kubelet[2788]: E0425 01:25:33.983633 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:33.984473 kubelet[2788]: I0425 01:25:33.984192 2788 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 25 01:25:33.984473 kubelet[2788]: I0425 01:25:33.984238 2788 reconciler.go:29] "Reconciler: start to sync state"
Apr 25 01:25:33.985762 kubelet[2788]: E0425 01:25:33.985735 2788 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cf3dcbc0ec?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms"
Apr 25 01:25:33.986353 kubelet[2788]: I0425 01:25:33.986332 2788 factory.go:223] Registration of the systemd container factory successfully
Apr 25 01:25:33.986575 kubelet[2788]: I0425 01:25:33.986558 2788 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 25 01:25:33.990324 kubelet[2788]: I0425 01:25:33.990286 2788 server.go:317] "Adding debug handlers to kubelet server"
Apr 25 01:25:33.992817 kubelet[2788]: I0425 01:25:33.992764 2788 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 25 01:25:33.992817 kubelet[2788]: I0425 01:25:33.992864 2788 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 25 01:25:33.992817 kubelet[2788]: I0425 01:25:33.993048 2788 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 25 01:25:33.992817 kubelet[2788]: I0425 01:25:33.993340 2788 factory.go:223] Registration of the containerd container factory successfully
Apr 25 01:25:34.012973 kubelet[2788]: E0425 01:25:34.012936 2788 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 25 01:25:34.084454 kubelet[2788]: E0425 01:25:34.084417 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:34.095243 kubelet[2788]: I0425 01:25:34.094985 2788 cpu_manager.go:225] "Starting" policy="none"
Apr 25 01:25:34.095243 kubelet[2788]: I0425 01:25:34.095001 2788 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 25 01:25:34.095243 kubelet[2788]: I0425 01:25:34.095021 2788 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 25 01:25:34.185070 kubelet[2788]: E0425 01:25:34.185026 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:34.186414 kubelet[2788]: E0425 01:25:34.186389 2788 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cf3dcbc0ec?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms"
Apr 25 01:25:34.285762 kubelet[2788]: E0425 01:25:34.285729 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:34.386339 kubelet[2788]: E0425 01:25:34.386316 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:34.435375 kubelet[2788]: I0425 01:25:34.435213 2788 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 25 01:25:34.436316 kubelet[2788]: I0425 01:25:34.436291 2788 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 25 01:25:34.436316 kubelet[2788]: I0425 01:25:34.436322 2788 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 25 01:25:34.436604 kubelet[2788]: I0425 01:25:34.436349 2788 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 25 01:25:34.437569 kubelet[2788]: E0425 01:25:34.437410 2788 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 25 01:25:34.486924 kubelet[2788]: E0425 01:25:34.486891 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:34.589006 kubelet[2788]: E0425 01:25:34.537993 2788 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 25 01:25:34.589006 kubelet[2788]: E0425 01:25:34.587412 2788 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found"
Apr 25 01:25:34.589006 kubelet[2788]: E0425 01:25:34.587730 2788 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cf3dcbc0ec?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms"
Apr 25 01:25:34.592470 kubelet[2788]: I0425 01:25:34.592443 2788 policy_none.go:50] "Start"
Apr 25 01:25:34.592548 kubelet[2788]: I0425 01:25:34.592480 2788 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 25 01:25:34.592548 kubelet[2788]: I0425 01:25:34.592495 2788 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 25 01:25:34.600496 kubelet[2788]: I0425 01:25:34.600473 2788 policy_none.go:44] "Start"
Apr 25 01:25:34.604053 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 25 01:25:34.619133 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 25 01:25:34.622262 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 25 01:25:34.630125 kubelet[2788]: E0425 01:25:34.630092 2788 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 25 01:25:34.630322 kubelet[2788]: I0425 01:25:34.630307 2788 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 25 01:25:34.630363 kubelet[2788]: I0425 01:25:34.630323 2788 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 25 01:25:34.630793 kubelet[2788]: I0425 01:25:34.630646 2788 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 25 01:25:34.632759 kubelet[2788]: E0425 01:25:34.632738 2788 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 25 01:25:34.632915 kubelet[2788]: E0425 01:25:34.632781 2788 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" Apr 25 01:25:34.732134 kubelet[2788]: I0425 01:25:34.732105 2788 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.732479 kubelet[2788]: E0425 01:25:34.732455 2788 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.788631 kubelet[2788]: I0425 01:25:34.788507 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91a55acd1e2fc46bf60febdf49453479-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"91a55acd1e2fc46bf60febdf49453479\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.788631 kubelet[2788]: I0425 01:25:34.788559 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91a55acd1e2fc46bf60febdf49453479-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"91a55acd1e2fc46bf60febdf49453479\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.788631 kubelet[2788]: I0425 01:25:34.788593 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91a55acd1e2fc46bf60febdf49453479-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"91a55acd1e2fc46bf60febdf49453479\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.934512 kubelet[2788]: I0425 
01:25:34.934190 2788 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.936466 kubelet[2788]: E0425 01:25:34.935926 2788 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.944050 systemd[1]: Created slice kubepods-burstable-pod91a55acd1e2fc46bf60febdf49453479.slice - libcontainer container kubepods-burstable-pod91a55acd1e2fc46bf60febdf49453479.slice. Apr 25 01:25:34.953221 kubelet[2788]: E0425 01:25:34.953192 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.990078 kubelet[2788]: I0425 01:25:34.990040 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.990078 kubelet[2788]: I0425 01:25:34.990081 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.990245 kubelet[2788]: I0425 01:25:34.990100 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.990245 kubelet[2788]: I0425 01:25:34.990115 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:34.990245 kubelet[2788]: I0425 01:25:34.990130 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:35.146780 containerd[1734]: time="2026-04-25T01:25:35.146675534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec,Uid:91a55acd1e2fc46bf60febdf49453479,Namespace:kube-system,Attempt:0,}" Apr 25 01:25:35.152816 systemd[1]: Created slice kubepods-burstable-pod8e439ab051de48651b51f8f5d01dd1ec.slice - libcontainer container kubepods-burstable-pod8e439ab051de48651b51f8f5d01dd1ec.slice. Apr 25 01:25:35.155368 kubelet[2788]: E0425 01:25:35.155344 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:35.161646 systemd[1]: Created slice kubepods-burstable-pod0a375f6398ee80c4edf378a9d270d8da.slice - libcontainer container kubepods-burstable-pod0a375f6398ee80c4edf378a9d270d8da.slice. 
Apr 25 01:25:35.163317 containerd[1734]: time="2026-04-25T01:25:35.163130333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec,Uid:8e439ab051de48651b51f8f5d01dd1ec,Namespace:kube-system,Attempt:0,}" Apr 25 01:25:35.163398 kubelet[2788]: E0425 01:25:35.163212 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:35.191910 kubelet[2788]: I0425 01:25:35.191451 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a375f6398ee80c4edf378a9d270d8da-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"0a375f6398ee80c4edf378a9d270d8da\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:35.337772 kubelet[2788]: I0425 01:25:35.337743 2788 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:35.338074 kubelet[2788]: E0425 01:25:35.338052 2788 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:35.388799 kubelet[2788]: E0425 01:25:35.388761 2788 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cf3dcbc0ec?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Apr 25 01:25:35.470633 containerd[1734]: time="2026-04-25T01:25:35.470302798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec,Uid:0a375f6398ee80c4edf378a9d270d8da,Namespace:kube-system,Attempt:0,}" Apr 25 01:25:35.907655 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2584418885.mount: Deactivated successfully. Apr 25 01:25:35.938504 containerd[1734]: time="2026-04-25T01:25:35.938420376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 01:25:35.941666 containerd[1734]: time="2026-04-25T01:25:35.941629415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 25 01:25:35.944589 containerd[1734]: time="2026-04-25T01:25:35.944556735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 01:25:35.948722 containerd[1734]: time="2026-04-25T01:25:35.947959815Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 01:25:35.950993 containerd[1734]: time="2026-04-25T01:25:35.950921775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 01:25:35.955237 containerd[1734]: time="2026-04-25T01:25:35.954216535Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 01:25:35.957950 containerd[1734]: time="2026-04-25T01:25:35.956427775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 25 01:25:35.961210 containerd[1734]: time="2026-04-25T01:25:35.960929935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 25 01:25:35.961993 containerd[1734]: time="2026-04-25T01:25:35.961766615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 798.534402ms" Apr 25 01:25:35.964220 containerd[1734]: time="2026-04-25T01:25:35.964183454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.806336ms" Apr 25 01:25:35.964864 containerd[1734]: time="2026-04-25T01:25:35.964825454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 818.07232ms" Apr 25 01:25:36.125333 kubelet[2788]: E0425 01:25:36.125288 2788 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 25 01:25:36.155031 kubelet[2788]: I0425 01:25:36.154732 2788 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:36.155132 kubelet[2788]: E0425 01:25:36.155089 2788 kubelet_node_status.go:106] "Unable to 
register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:36.989670 kubelet[2788]: E0425 01:25:36.989625 2788 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-cf3dcbc0ec?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="3.2s" Apr 25 01:25:37.042095 containerd[1734]: time="2026-04-25T01:25:37.041988110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:25:37.042095 containerd[1734]: time="2026-04-25T01:25:37.042054910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:25:37.042095 containerd[1734]: time="2026-04-25T01:25:37.042070550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:37.043266 containerd[1734]: time="2026-04-25T01:25:37.043028589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:37.043694 containerd[1734]: time="2026-04-25T01:25:37.043456109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:25:37.043694 containerd[1734]: time="2026-04-25T01:25:37.043515389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:25:37.043694 containerd[1734]: time="2026-04-25T01:25:37.043527589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:37.043694 containerd[1734]: time="2026-04-25T01:25:37.043601429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:37.047768 containerd[1734]: time="2026-04-25T01:25:37.046890388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:25:37.047768 containerd[1734]: time="2026-04-25T01:25:37.046954348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:25:37.047768 containerd[1734]: time="2026-04-25T01:25:37.046966068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:37.047768 containerd[1734]: time="2026-04-25T01:25:37.047047428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:37.086612 systemd[1]: Started cri-containerd-65e24df2de267b10a26994ff29bb65fdfbaf7783fe4bee4d8bc096f48cc5394e.scope - libcontainer container 65e24df2de267b10a26994ff29bb65fdfbaf7783fe4bee4d8bc096f48cc5394e. Apr 25 01:25:37.088420 systemd[1]: Started cri-containerd-d3da6f00abd9267ce1048b22d71749f401f3aa6707dbef56fa9e9ef6f1321552.scope - libcontainer container d3da6f00abd9267ce1048b22d71749f401f3aa6707dbef56fa9e9ef6f1321552. Apr 25 01:25:37.092995 systemd[1]: Started cri-containerd-32977ac8c1176fedad482f4945f54224ba05792d3a8cb250f9f7e6d2c76c6fd4.scope - libcontainer container 32977ac8c1176fedad482f4945f54224ba05792d3a8cb250f9f7e6d2c76c6fd4. 
Apr 25 01:25:37.141546 containerd[1734]: time="2026-04-25T01:25:37.141336204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec,Uid:91a55acd1e2fc46bf60febdf49453479,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3da6f00abd9267ce1048b22d71749f401f3aa6707dbef56fa9e9ef6f1321552\"" Apr 25 01:25:37.145182 containerd[1734]: time="2026-04-25T01:25:37.145142483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec,Uid:8e439ab051de48651b51f8f5d01dd1ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"32977ac8c1176fedad482f4945f54224ba05792d3a8cb250f9f7e6d2c76c6fd4\"" Apr 25 01:25:37.150078 containerd[1734]: time="2026-04-25T01:25:37.150009082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec,Uid:0a375f6398ee80c4edf378a9d270d8da,Namespace:kube-system,Attempt:0,} returns sandbox id \"65e24df2de267b10a26994ff29bb65fdfbaf7783fe4bee4d8bc096f48cc5394e\"" Apr 25 01:25:37.158184 containerd[1734]: time="2026-04-25T01:25:37.157920120Z" level=info msg="CreateContainer within sandbox \"d3da6f00abd9267ce1048b22d71749f401f3aa6707dbef56fa9e9ef6f1321552\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 25 01:25:37.161311 containerd[1734]: time="2026-04-25T01:25:37.161281799Z" level=info msg="CreateContainer within sandbox \"32977ac8c1176fedad482f4945f54224ba05792d3a8cb250f9f7e6d2c76c6fd4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 25 01:25:37.166953 containerd[1734]: time="2026-04-25T01:25:37.166919798Z" level=info msg="CreateContainer within sandbox \"65e24df2de267b10a26994ff29bb65fdfbaf7783fe4bee4d8bc096f48cc5394e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 25 01:25:37.208219 containerd[1734]: time="2026-04-25T01:25:37.208172707Z" level=info msg="CreateContainer within sandbox 
\"32977ac8c1176fedad482f4945f54224ba05792d3a8cb250f9f7e6d2c76c6fd4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32d5c7c94e638093cfc7d48ac3b2287a0ec1be4d88a4a2fe6f76c09917185a3a\"" Apr 25 01:25:37.208849 containerd[1734]: time="2026-04-25T01:25:37.208825227Z" level=info msg="StartContainer for \"32d5c7c94e638093cfc7d48ac3b2287a0ec1be4d88a4a2fe6f76c09917185a3a\"" Apr 25 01:25:37.228272 containerd[1734]: time="2026-04-25T01:25:37.227912302Z" level=info msg="CreateContainer within sandbox \"d3da6f00abd9267ce1048b22d71749f401f3aa6707dbef56fa9e9ef6f1321552\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a4bf55b2d4d7e50cebd6522c00d59cd5ef7d947fde33752cb3972b58f10ac77\"" Apr 25 01:25:37.232686 containerd[1734]: time="2026-04-25T01:25:37.231051181Z" level=info msg="StartContainer for \"6a4bf55b2d4d7e50cebd6522c00d59cd5ef7d947fde33752cb3972b58f10ac77\"" Apr 25 01:25:37.234615 systemd[1]: Started cri-containerd-32d5c7c94e638093cfc7d48ac3b2287a0ec1be4d88a4a2fe6f76c09917185a3a.scope - libcontainer container 32d5c7c94e638093cfc7d48ac3b2287a0ec1be4d88a4a2fe6f76c09917185a3a. Apr 25 01:25:37.246896 containerd[1734]: time="2026-04-25T01:25:37.246788337Z" level=info msg="CreateContainer within sandbox \"65e24df2de267b10a26994ff29bb65fdfbaf7783fe4bee4d8bc096f48cc5394e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"303fbb2365dbaf4e90fd7b5ee6dd0ddd8c028e3f65b3371e5160ad6d30efa725\"" Apr 25 01:25:37.248405 containerd[1734]: time="2026-04-25T01:25:37.248375697Z" level=info msg="StartContainer for \"303fbb2365dbaf4e90fd7b5ee6dd0ddd8c028e3f65b3371e5160ad6d30efa725\"" Apr 25 01:25:37.266906 systemd[1]: Started cri-containerd-6a4bf55b2d4d7e50cebd6522c00d59cd5ef7d947fde33752cb3972b58f10ac77.scope - libcontainer container 6a4bf55b2d4d7e50cebd6522c00d59cd5ef7d947fde33752cb3972b58f10ac77. 
Apr 25 01:25:37.286606 systemd[1]: Started cri-containerd-303fbb2365dbaf4e90fd7b5ee6dd0ddd8c028e3f65b3371e5160ad6d30efa725.scope - libcontainer container 303fbb2365dbaf4e90fd7b5ee6dd0ddd8c028e3f65b3371e5160ad6d30efa725. Apr 25 01:25:37.295431 containerd[1734]: time="2026-04-25T01:25:37.293968085Z" level=info msg="StartContainer for \"32d5c7c94e638093cfc7d48ac3b2287a0ec1be4d88a4a2fe6f76c09917185a3a\" returns successfully" Apr 25 01:25:37.347625 containerd[1734]: time="2026-04-25T01:25:37.347576031Z" level=info msg="StartContainer for \"6a4bf55b2d4d7e50cebd6522c00d59cd5ef7d947fde33752cb3972b58f10ac77\" returns successfully" Apr 25 01:25:37.347829 containerd[1734]: time="2026-04-25T01:25:37.347576031Z" level=info msg="StartContainer for \"303fbb2365dbaf4e90fd7b5ee6dd0ddd8c028e3f65b3371e5160ad6d30efa725\" returns successfully" Apr 25 01:25:37.449649 kubelet[2788]: E0425 01:25:37.449622 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:37.455449 kubelet[2788]: E0425 01:25:37.453821 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:37.458031 kubelet[2788]: E0425 01:25:37.457877 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:37.760075 kubelet[2788]: I0425 01:25:37.759365 2788 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:38.461454 kubelet[2788]: E0425 01:25:38.460916 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:38.461454 
kubelet[2788]: E0425 01:25:38.461099 2788 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:38.973671 kubelet[2788]: I0425 01:25:38.973626 2788 apiserver.go:52] "Watching apiserver" Apr 25 01:25:38.985275 kubelet[2788]: I0425 01:25:38.985135 2788 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 25 01:25:39.100567 kubelet[2788]: I0425 01:25:39.099765 2788 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.100567 kubelet[2788]: E0425 01:25:39.099805 2788 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-cf3dcbc0ec\": node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" Apr 25 01:25:39.184505 kubelet[2788]: I0425 01:25:39.184331 2788 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.285829 kubelet[2788]: E0425 01:25:39.285794 2788 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.285829 kubelet[2788]: I0425 01:25:39.285826 2788 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.287775 kubelet[2788]: E0425 01:25:39.287658 2788 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.287775 kubelet[2788]: I0425 01:25:39.287681 2788 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.291387 kubelet[2788]: E0425 01:25:39.291355 2788 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.460501 kubelet[2788]: I0425 01:25:39.460222 2788 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.460501 kubelet[2788]: I0425 01:25:39.460502 2788 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.462769 kubelet[2788]: E0425 01:25:39.462728 2788 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:39.463648 kubelet[2788]: E0425 01:25:39.463465 2788 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:41.472967 systemd[1]: Reloading requested from client PID 3073 ('systemctl') (unit session-9.scope)... Apr 25 01:25:41.473257 systemd[1]: Reloading... Apr 25 01:25:41.565465 zram_generator::config[3119]: No configuration found. Apr 25 01:25:41.679689 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 25 01:25:41.790675 systemd[1]: Reloading finished in 317 ms. Apr 25 01:25:41.831624 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 25 01:25:41.850606 systemd[1]: kubelet.service: Deactivated successfully. Apr 25 01:25:41.850833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 01:25:41.856809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 25 01:25:42.012564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 25 01:25:42.018628 (kubelet)[3177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 25 01:25:42.060487 kubelet[3177]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 25 01:25:42.069534 kubelet[3177]: I0425 01:25:42.069476 3177 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 25 01:25:42.069671 kubelet[3177]: I0425 01:25:42.069661 3177 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 25 01:25:42.069739 kubelet[3177]: I0425 01:25:42.069731 3177 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 25 01:25:42.069785 kubelet[3177]: I0425 01:25:42.069775 3177 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 25 01:25:42.070324 kubelet[3177]: I0425 01:25:42.070308 3177 server.go:951] "Client rotation is on, will bootstrap in background" Apr 25 01:25:42.072166 kubelet[3177]: I0425 01:25:42.072144 3177 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 25 01:25:42.074186 kubelet[3177]: I0425 01:25:42.074043 3177 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 25 01:25:42.078488 kubelet[3177]: E0425 01:25:42.078455 3177 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 25 01:25:42.078646 kubelet[3177]: I0425 01:25:42.078633 3177 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 25 01:25:42.082224 kubelet[3177]: I0425 01:25:42.082197 3177 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 25 01:25:42.082953 kubelet[3177]: I0425 01:25:42.082560 3177 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 25 01:25:42.082953 kubelet[3177]: I0425 01:25:42.082585 3177 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-cf3dcbc0ec","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 25 01:25:42.082953 kubelet[3177]: I0425 01:25:42.082734 3177 topology_manager.go:143] "Creating topology manager with none policy" Apr 25 
01:25:42.082953 kubelet[3177]: I0425 01:25:42.082741 3177 container_manager_linux.go:308] "Creating device plugin manager" Apr 25 01:25:42.083161 kubelet[3177]: I0425 01:25:42.082761 3177 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 25 01:25:42.083306 kubelet[3177]: I0425 01:25:42.083290 3177 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 25 01:25:42.083583 kubelet[3177]: I0425 01:25:42.083567 3177 kubelet.go:482] "Attempting to sync node with API server" Apr 25 01:25:42.083673 kubelet[3177]: I0425 01:25:42.083663 3177 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 25 01:25:42.083732 kubelet[3177]: I0425 01:25:42.083724 3177 kubelet.go:394] "Adding apiserver pod source" Apr 25 01:25:42.083785 kubelet[3177]: I0425 01:25:42.083777 3177 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 25 01:25:42.088922 kubelet[3177]: I0425 01:25:42.088877 3177 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 25 01:25:42.090920 kubelet[3177]: I0425 01:25:42.090890 3177 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 25 01:25:42.091013 kubelet[3177]: I0425 01:25:42.090932 3177 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 25 01:25:42.101466 kubelet[3177]: I0425 01:25:42.100022 3177 server.go:1257] "Started kubelet" Apr 25 01:25:42.104273 kubelet[3177]: I0425 01:25:42.103834 3177 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 25 01:25:42.112295 kubelet[3177]: I0425 01:25:42.112267 3177 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 25 01:25:42.112674 kubelet[3177]: E0425 01:25:42.112651 3177 kubelet_node_status.go:392] "Error getting 
the current node from lister" err="node \"ci-4081.3.6-n-cf3dcbc0ec\" not found" Apr 25 01:25:42.124160 kubelet[3177]: I0425 01:25:42.112872 3177 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 25 01:25:42.124850 kubelet[3177]: I0425 01:25:42.124823 3177 factory.go:223] Registration of the systemd container factory successfully Apr 25 01:25:42.125312 kubelet[3177]: I0425 01:25:42.124917 3177 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 25 01:25:42.125630 kubelet[3177]: I0425 01:25:42.125611 3177 server.go:317] "Adding debug handlers to kubelet server" Apr 25 01:25:42.126262 kubelet[3177]: I0425 01:25:42.123717 3177 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 25 01:25:42.127854 kubelet[3177]: I0425 01:25:42.113566 3177 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 25 01:25:42.127995 kubelet[3177]: I0425 01:25:42.127979 3177 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 25 01:25:42.128461 kubelet[3177]: I0425 01:25:42.128177 3177 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 25 01:25:42.131388 kubelet[3177]: I0425 01:25:42.118036 3177 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 25 01:25:42.131388 kubelet[3177]: I0425 01:25:42.118153 3177 reconciler.go:29] "Reconciler: start to sync state" Apr 25 01:25:42.134799 kubelet[3177]: I0425 01:25:42.134773 3177 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 25 01:25:42.135747 kubelet[3177]: I0425 01:25:42.135730 3177 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 25 01:25:42.135843 kubelet[3177]: I0425 01:25:42.135833 3177 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 25 01:25:42.135901 kubelet[3177]: I0425 01:25:42.135892 3177 kubelet.go:2501] "Starting kubelet main sync loop" Apr 25 01:25:42.135986 kubelet[3177]: E0425 01:25:42.135970 3177 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 25 01:25:42.138457 kubelet[3177]: I0425 01:25:42.138425 3177 factory.go:223] Registration of the containerd container factory successfully Apr 25 01:25:42.187616 kubelet[3177]: I0425 01:25:42.187591 3177 cpu_manager.go:225] "Starting" policy="none" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187771 3177 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187798 3177 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187925 3177 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187935 3177 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187954 3177 policy_none.go:50] "Start" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187962 3177 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.187971 3177 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.188065 3177 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 25 01:25:42.188563 kubelet[3177]: I0425 01:25:42.188076 3177 
policy_none.go:44] "Start" Apr 25 01:25:42.193688 kubelet[3177]: E0425 01:25:42.193660 3177 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 25 01:25:42.193842 kubelet[3177]: I0425 01:25:42.193825 3177 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 25 01:25:42.193890 kubelet[3177]: I0425 01:25:42.193843 3177 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 25 01:25:42.194959 kubelet[3177]: I0425 01:25:42.194398 3177 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 25 01:25:42.197626 kubelet[3177]: E0425 01:25:42.196753 3177 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 25 01:25:42.237287 kubelet[3177]: I0425 01:25:42.236980 3177 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.237287 kubelet[3177]: I0425 01:25:42.237113 3177 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.237287 kubelet[3177]: I0425 01:25:42.236980 3177 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.248485 kubelet[3177]: I0425 01:25:42.248430 3177 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 25 01:25:42.254132 kubelet[3177]: I0425 01:25:42.253943 3177 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 25 01:25:42.254132 kubelet[3177]: I0425 01:25:42.253997 3177 warnings.go:107] "Warning: metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 25 01:25:42.296165 kubelet[3177]: I0425 01:25:42.296128 3177 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.307467 kubelet[3177]: I0425 01:25:42.307241 3177 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.307467 kubelet[3177]: I0425 01:25:42.307329 3177 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.431926 kubelet[3177]: I0425 01:25:42.431826 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.432290 kubelet[3177]: I0425 01:25:42.432265 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.432448 kubelet[3177]: I0425 01:25:42.432372 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.432924 kubelet[3177]: I0425 01:25:42.432878 3177 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.433107 kubelet[3177]: I0425 01:25:42.433089 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a375f6398ee80c4edf378a9d270d8da-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"0a375f6398ee80c4edf378a9d270d8da\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.433218 kubelet[3177]: I0425 01:25:42.433204 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e439ab051de48651b51f8f5d01dd1ec-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"8e439ab051de48651b51f8f5d01dd1ec\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.433366 kubelet[3177]: I0425 01:25:42.433321 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91a55acd1e2fc46bf60febdf49453479-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"91a55acd1e2fc46bf60febdf49453479\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.433469 kubelet[3177]: I0425 01:25:42.433456 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91a55acd1e2fc46bf60febdf49453479-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"91a55acd1e2fc46bf60febdf49453479\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:42.433581 kubelet[3177]: I0425 01:25:42.433567 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91a55acd1e2fc46bf60febdf49453479-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" (UID: \"91a55acd1e2fc46bf60febdf49453479\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:43.087075 kubelet[3177]: I0425 01:25:43.087039 3177 apiserver.go:52] "Watching apiserver" Apr 25 01:25:43.132053 kubelet[3177]: I0425 01:25:43.132012 3177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 25 01:25:43.169840 kubelet[3177]: I0425 01:25:43.169775 3177 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:43.170171 kubelet[3177]: I0425 01:25:43.170151 3177 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:43.182833 kubelet[3177]: I0425 01:25:43.182807 3177 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 25 01:25:43.182959 kubelet[3177]: E0425 01:25:43.182858 3177 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:43.183853 kubelet[3177]: I0425 01:25:43.183822 3177 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Apr 25 01:25:43.183928 kubelet[3177]: E0425 01:25:43.183860 3177 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:25:44.213098 kubelet[3177]: I0425 01:25:44.212677 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-cf3dcbc0ec" podStartSLOduration=2.212648265 podStartE2EDuration="2.212648265s" podCreationTimestamp="2026-04-25 01:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 01:25:44.200986629 +0000 UTC m=+2.179608356" watchObservedRunningTime="2026-04-25 01:25:44.212648265 +0000 UTC m=+2.191269992" Apr 25 01:25:44.224536 kubelet[3177]: I0425 01:25:44.224431 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-cf3dcbc0ec" podStartSLOduration=2.224417502 podStartE2EDuration="2.224417502s" podCreationTimestamp="2026-04-25 01:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 01:25:44.212995065 +0000 UTC m=+2.191616792" watchObservedRunningTime="2026-04-25 01:25:44.224417502 +0000 UTC m=+2.203039229" Apr 25 01:25:44.224736 kubelet[3177]: I0425 01:25:44.224542 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-cf3dcbc0ec" podStartSLOduration=2.224537822 podStartE2EDuration="2.224537822s" podCreationTimestamp="2026-04-25 01:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 01:25:44.224376662 +0000 UTC m=+2.202998389" watchObservedRunningTime="2026-04-25 01:25:44.224537822 +0000 UTC m=+2.203159549" Apr 25 01:25:47.457447 kubelet[3177]: I0425 01:25:47.457386 3177 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Apr 25 01:25:47.458090 containerd[1734]: time="2026-04-25T01:25:47.457738174Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 25 01:25:47.458800 kubelet[3177]: I0425 01:25:47.458545 3177 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 25 01:25:48.169544 systemd[1]: Created slice kubepods-besteffort-podd21a25b2_5771_4079_86d3_873abbd12c8c.slice - libcontainer container kubepods-besteffort-podd21a25b2_5771_4079_86d3_873abbd12c8c.slice. Apr 25 01:25:48.266279 kubelet[3177]: I0425 01:25:48.266239 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d21a25b2-5771-4079-86d3-873abbd12c8c-kube-proxy\") pod \"kube-proxy-zcwk7\" (UID: \"d21a25b2-5771-4079-86d3-873abbd12c8c\") " pod="kube-system/kube-proxy-zcwk7" Apr 25 01:25:48.266279 kubelet[3177]: I0425 01:25:48.266278 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d21a25b2-5771-4079-86d3-873abbd12c8c-xtables-lock\") pod \"kube-proxy-zcwk7\" (UID: \"d21a25b2-5771-4079-86d3-873abbd12c8c\") " pod="kube-system/kube-proxy-zcwk7" Apr 25 01:25:48.266472 kubelet[3177]: I0425 01:25:48.266296 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d21a25b2-5771-4079-86d3-873abbd12c8c-lib-modules\") pod \"kube-proxy-zcwk7\" (UID: \"d21a25b2-5771-4079-86d3-873abbd12c8c\") " pod="kube-system/kube-proxy-zcwk7" Apr 25 01:25:48.266472 kubelet[3177]: I0425 01:25:48.266314 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7msfs\" (UniqueName: \"kubernetes.io/projected/d21a25b2-5771-4079-86d3-873abbd12c8c-kube-api-access-7msfs\") pod 
\"kube-proxy-zcwk7\" (UID: \"d21a25b2-5771-4079-86d3-873abbd12c8c\") " pod="kube-system/kube-proxy-zcwk7" Apr 25 01:25:48.483516 containerd[1734]: time="2026-04-25T01:25:48.483399699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcwk7,Uid:d21a25b2-5771-4079-86d3-873abbd12c8c,Namespace:kube-system,Attempt:0,}" Apr 25 01:25:48.527750 containerd[1734]: time="2026-04-25T01:25:48.527644162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:25:48.527750 containerd[1734]: time="2026-04-25T01:25:48.527701242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:25:48.527750 containerd[1734]: time="2026-04-25T01:25:48.527716642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:48.528277 containerd[1734]: time="2026-04-25T01:25:48.527792962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:48.549648 systemd[1]: Started cri-containerd-8c5b975d708581f7710674721fcc88daabd4616324afb93d74164aa1954699cc.scope - libcontainer container 8c5b975d708581f7710674721fcc88daabd4616324afb93d74164aa1954699cc. 
Apr 25 01:25:48.574958 containerd[1734]: time="2026-04-25T01:25:48.574915704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zcwk7,Uid:d21a25b2-5771-4079-86d3-873abbd12c8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c5b975d708581f7710674721fcc88daabd4616324afb93d74164aa1954699cc\"" Apr 25 01:25:48.586327 containerd[1734]: time="2026-04-25T01:25:48.586286380Z" level=info msg="CreateContainer within sandbox \"8c5b975d708581f7710674721fcc88daabd4616324afb93d74164aa1954699cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 25 01:25:48.631006 containerd[1734]: time="2026-04-25T01:25:48.630960962Z" level=info msg="CreateContainer within sandbox \"8c5b975d708581f7710674721fcc88daabd4616324afb93d74164aa1954699cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"672955471a6e0f39f98d3f6cc88197d6fb997e4b605e98ed416c2269afdd71d8\"" Apr 25 01:25:48.632020 containerd[1734]: time="2026-04-25T01:25:48.631987602Z" level=info msg="StartContainer for \"672955471a6e0f39f98d3f6cc88197d6fb997e4b605e98ed416c2269afdd71d8\"" Apr 25 01:25:48.657600 systemd[1]: Started cri-containerd-672955471a6e0f39f98d3f6cc88197d6fb997e4b605e98ed416c2269afdd71d8.scope - libcontainer container 672955471a6e0f39f98d3f6cc88197d6fb997e4b605e98ed416c2269afdd71d8. Apr 25 01:25:48.704033 containerd[1734]: time="2026-04-25T01:25:48.703988494Z" level=info msg="StartContainer for \"672955471a6e0f39f98d3f6cc88197d6fb997e4b605e98ed416c2269afdd71d8\" returns successfully" Apr 25 01:25:48.751293 systemd[1]: Created slice kubepods-besteffort-pod11695b78_19cf_4936_aa77_dfc84ca08cd9.slice - libcontainer container kubepods-besteffort-pod11695b78_19cf_4936_aa77_dfc84ca08cd9.slice. 
Apr 25 01:25:48.769277 kubelet[3177]: I0425 01:25:48.769193 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11695b78-19cf-4936-aa77-dfc84ca08cd9-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-tk85t\" (UID: \"11695b78-19cf-4936-aa77-dfc84ca08cd9\") " pod="tigera-operator/tigera-operator-6cf4cccc57-tk85t" Apr 25 01:25:48.769277 kubelet[3177]: I0425 01:25:48.769235 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcfm8\" (UniqueName: \"kubernetes.io/projected/11695b78-19cf-4936-aa77-dfc84ca08cd9-kube-api-access-hcfm8\") pod \"tigera-operator-6cf4cccc57-tk85t\" (UID: \"11695b78-19cf-4936-aa77-dfc84ca08cd9\") " pod="tigera-operator/tigera-operator-6cf4cccc57-tk85t" Apr 25 01:25:49.063402 containerd[1734]: time="2026-04-25T01:25:49.063361236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-tk85t,Uid:11695b78-19cf-4936-aa77-dfc84ca08cd9,Namespace:tigera-operator,Attempt:0,}" Apr 25 01:25:49.116583 containerd[1734]: time="2026-04-25T01:25:49.116361295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:25:49.116583 containerd[1734]: time="2026-04-25T01:25:49.116416055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:25:49.116583 containerd[1734]: time="2026-04-25T01:25:49.116431135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:49.116583 containerd[1734]: time="2026-04-25T01:25:49.116533975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:25:49.133689 systemd[1]: Started cri-containerd-45cc74d01d838e4a341752cba4748bf136858ddbbd54cac7aceebaff97639c20.scope - libcontainer container 45cc74d01d838e4a341752cba4748bf136858ddbbd54cac7aceebaff97639c20. Apr 25 01:25:49.163831 containerd[1734]: time="2026-04-25T01:25:49.163792437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-tk85t,Uid:11695b78-19cf-4936-aa77-dfc84ca08cd9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"45cc74d01d838e4a341752cba4748bf136858ddbbd54cac7aceebaff97639c20\"" Apr 25 01:25:49.167126 containerd[1734]: time="2026-04-25T01:25:49.167079316Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 25 01:25:49.212861 kubelet[3177]: I0425 01:25:49.212569 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-zcwk7" podStartSLOduration=1.212554498 podStartE2EDuration="1.212554498s" podCreationTimestamp="2026-04-25 01:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 01:25:49.212467098 +0000 UTC m=+7.191088825" watchObservedRunningTime="2026-04-25 01:25:49.212554498 +0000 UTC m=+7.191176225" Apr 25 01:25:49.386015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937325650.mount: Deactivated successfully. Apr 25 01:25:50.918984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590488308.mount: Deactivated successfully. 
Apr 25 01:25:51.783898 containerd[1734]: time="2026-04-25T01:25:51.783670428Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:25:51.786763 containerd[1734]: time="2026-04-25T01:25:51.786725867Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 25 01:25:51.791427 containerd[1734]: time="2026-04-25T01:25:51.791390385Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:25:51.796965 containerd[1734]: time="2026-04-25T01:25:51.796926743Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:25:51.798465 containerd[1734]: time="2026-04-25T01:25:51.798418063Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.631295387s" Apr 25 01:25:51.798496 containerd[1734]: time="2026-04-25T01:25:51.798466903Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 25 01:25:51.806351 containerd[1734]: time="2026-04-25T01:25:51.806283980Z" level=info msg="CreateContainer within sandbox \"45cc74d01d838e4a341752cba4748bf136858ddbbd54cac7aceebaff97639c20\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 25 01:25:51.836539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881570749.mount: Deactivated successfully. 
Apr 25 01:25:51.847981 containerd[1734]: time="2026-04-25T01:25:51.847850444Z" level=info msg="CreateContainer within sandbox \"45cc74d01d838e4a341752cba4748bf136858ddbbd54cac7aceebaff97639c20\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"306d0d3b855c2c0ee781c805c446ebd747d9d165be4d13311a1d9d9e7d455739\"" Apr 25 01:25:51.849609 containerd[1734]: time="2026-04-25T01:25:51.849583923Z" level=info msg="StartContainer for \"306d0d3b855c2c0ee781c805c446ebd747d9d165be4d13311a1d9d9e7d455739\"" Apr 25 01:25:51.873622 systemd[1]: Started cri-containerd-306d0d3b855c2c0ee781c805c446ebd747d9d165be4d13311a1d9d9e7d455739.scope - libcontainer container 306d0d3b855c2c0ee781c805c446ebd747d9d165be4d13311a1d9d9e7d455739. Apr 25 01:25:51.904345 containerd[1734]: time="2026-04-25T01:25:51.904283742Z" level=info msg="StartContainer for \"306d0d3b855c2c0ee781c805c446ebd747d9d165be4d13311a1d9d9e7d455739\" returns successfully" Apr 25 01:25:52.202862 kubelet[3177]: I0425 01:25:52.201878 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-tk85t" podStartSLOduration=1.5683418009999999 podStartE2EDuration="4.201865707s" podCreationTimestamp="2026-04-25 01:25:48 +0000 UTC" firstStartedPulling="2026-04-25 01:25:49.165791836 +0000 UTC m=+7.144413523" lastFinishedPulling="2026-04-25 01:25:51.799315702 +0000 UTC m=+9.777937429" observedRunningTime="2026-04-25 01:25:52.201677588 +0000 UTC m=+10.180299315" watchObservedRunningTime="2026-04-25 01:25:52.201865707 +0000 UTC m=+10.180487434" Apr 25 01:25:57.732582 sudo[2242]: pam_unix(sudo:session): session closed for user root Apr 25 01:25:57.882981 sshd[2239]: pam_unix(sshd:session): session closed for user core Apr 25 01:25:57.888816 systemd[1]: sshd@6-10.0.0.7:22-4.175.71.9:39534.service: Deactivated successfully. Apr 25 01:25:57.890330 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 25 01:25:57.890820 systemd[1]: session-9.scope: Consumed 4.818s CPU time, 155.2M memory peak, 0B memory swap peak. Apr 25 01:25:57.892472 systemd-logind[1716]: Session 9 logged out. Waiting for processes to exit. Apr 25 01:25:57.894942 systemd-logind[1716]: Removed session 9. Apr 25 01:26:03.941358 systemd[1]: Created slice kubepods-besteffort-poddb912044_2799_4b9a_9cb5_3c218cc8a2f9.slice - libcontainer container kubepods-besteffort-poddb912044_2799_4b9a_9cb5_3c218cc8a2f9.slice. Apr 25 01:26:03.965497 kubelet[3177]: I0425 01:26:03.964689 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db912044-2799-4b9a-9cb5-3c218cc8a2f9-tigera-ca-bundle\") pod \"calico-typha-6c4db9bcdc-ss8sx\" (UID: \"db912044-2799-4b9a-9cb5-3c218cc8a2f9\") " pod="calico-system/calico-typha-6c4db9bcdc-ss8sx" Apr 25 01:26:03.965497 kubelet[3177]: I0425 01:26:03.964729 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/db912044-2799-4b9a-9cb5-3c218cc8a2f9-typha-certs\") pod \"calico-typha-6c4db9bcdc-ss8sx\" (UID: \"db912044-2799-4b9a-9cb5-3c218cc8a2f9\") " pod="calico-system/calico-typha-6c4db9bcdc-ss8sx" Apr 25 01:26:03.965497 kubelet[3177]: I0425 01:26:03.964751 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx86r\" (UniqueName: \"kubernetes.io/projected/db912044-2799-4b9a-9cb5-3c218cc8a2f9-kube-api-access-gx86r\") pod \"calico-typha-6c4db9bcdc-ss8sx\" (UID: \"db912044-2799-4b9a-9cb5-3c218cc8a2f9\") " pod="calico-system/calico-typha-6c4db9bcdc-ss8sx" Apr 25 01:26:04.032770 systemd[1]: Created slice kubepods-besteffort-pod207e9921_348b_4272_8c2e_665ee7954782.slice - libcontainer container kubepods-besteffort-pod207e9921_348b_4272_8c2e_665ee7954782.slice. 
Apr 25 01:26:04.065402 kubelet[3177]: I0425 01:26:04.065365 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-cni-net-dir\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.065597 kubelet[3177]: I0425 01:26:04.065581 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-flexvol-driver-host\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.065692 kubelet[3177]: I0425 01:26:04.065681 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-nodeproc\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067054 kubelet[3177]: I0425 01:26:04.065748 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-policysync\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067054 kubelet[3177]: I0425 01:26:04.065768 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/207e9921-348b-4272-8c2e-665ee7954782-node-certs\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067054 kubelet[3177]: I0425 01:26:04.065781 3177 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-var-lib-calico\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067054 kubelet[3177]: I0425 01:26:04.065796 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-lib-modules\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067054 kubelet[3177]: I0425 01:26:04.065812 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-sys-fs\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067054 kubelet[3177]: I0425 01:26:04.065838 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-bpffs\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067270 kubelet[3177]: I0425 01:26:04.065854 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-cni-log-dir\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067270 kubelet[3177]: I0425 01:26:04.065867 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/207e9921-348b-4272-8c2e-665ee7954782-tigera-ca-bundle\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067270 kubelet[3177]: I0425 01:26:04.065895 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-cni-bin-dir\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067270 kubelet[3177]: I0425 01:26:04.065908 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-var-run-calico\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067270 kubelet[3177]: I0425 01:26:04.065922 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/207e9921-348b-4272-8c2e-665ee7954782-xtables-lock\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.067386 kubelet[3177]: I0425 01:26:04.065938 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lb5\" (UniqueName: \"kubernetes.io/projected/207e9921-348b-4272-8c2e-665ee7954782-kube-api-access-g5lb5\") pod \"calico-node-6cdl8\" (UID: \"207e9921-348b-4272-8c2e-665ee7954782\") " pod="calico-system/calico-node-6cdl8" Apr 25 01:26:04.140422 kubelet[3177]: E0425 01:26:04.140358 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:04.166890 kubelet[3177]: I0425 01:26:04.166854 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/824a5d45-7c15-4527-82b4-bbcfeeb63e50-socket-dir\") pod \"csi-node-driver-m9jdr\" (UID: \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\") " pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:04.167035 kubelet[3177]: I0425 01:26:04.166913 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/824a5d45-7c15-4527-82b4-bbcfeeb63e50-kubelet-dir\") pod \"csi-node-driver-m9jdr\" (UID: \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\") " pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:04.167035 kubelet[3177]: I0425 01:26:04.166965 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/824a5d45-7c15-4527-82b4-bbcfeeb63e50-registration-dir\") pod \"csi-node-driver-m9jdr\" (UID: \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\") " pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:04.167035 kubelet[3177]: I0425 01:26:04.166982 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/824a5d45-7c15-4527-82b4-bbcfeeb63e50-varrun\") pod \"csi-node-driver-m9jdr\" (UID: \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\") " pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:04.167035 kubelet[3177]: I0425 01:26:04.167026 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpvnv\" (UniqueName: \"kubernetes.io/projected/824a5d45-7c15-4527-82b4-bbcfeeb63e50-kube-api-access-tpvnv\") pod 
\"csi-node-driver-m9jdr\" (UID: \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\") " pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:04.167947 kubelet[3177]: E0425 01:26:04.167927 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.168144 kubelet[3177]: W0425 01:26:04.168031 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.168144 kubelet[3177]: E0425 01:26:04.168062 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.168343 kubelet[3177]: E0425 01:26:04.168323 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.169047 kubelet[3177]: W0425 01:26:04.168479 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.169047 kubelet[3177]: E0425 01:26:04.168500 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.169332 kubelet[3177]: E0425 01:26:04.169318 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.169402 kubelet[3177]: W0425 01:26:04.169391 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.170050 kubelet[3177]: E0425 01:26:04.169904 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.170473 kubelet[3177]: E0425 01:26:04.170458 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.170568 kubelet[3177]: W0425 01:26:04.170555 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.170629 kubelet[3177]: E0425 01:26:04.170618 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.172604 kubelet[3177]: E0425 01:26:04.172586 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.172803 kubelet[3177]: W0425 01:26:04.172691 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.172803 kubelet[3177]: E0425 01:26:04.172708 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.173003 kubelet[3177]: E0425 01:26:04.172992 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.174521 kubelet[3177]: W0425 01:26:04.174393 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.174521 kubelet[3177]: E0425 01:26:04.174420 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.174752 kubelet[3177]: E0425 01:26:04.174739 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.175011 kubelet[3177]: W0425 01:26:04.174809 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.175011 kubelet[3177]: E0425 01:26:04.174827 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.176536 kubelet[3177]: E0425 01:26:04.175165 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.176536 kubelet[3177]: W0425 01:26:04.175181 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.176536 kubelet[3177]: E0425 01:26:04.175194 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.176536 kubelet[3177]: E0425 01:26:04.175388 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.176536 kubelet[3177]: W0425 01:26:04.175397 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.176536 kubelet[3177]: E0425 01:26:04.175408 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.177449 kubelet[3177]: E0425 01:26:04.177412 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.177819 kubelet[3177]: W0425 01:26:04.177431 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.177883 kubelet[3177]: E0425 01:26:04.177823 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.178571 kubelet[3177]: E0425 01:26:04.178343 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.178571 kubelet[3177]: W0425 01:26:04.178565 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.178674 kubelet[3177]: E0425 01:26:04.178582 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.181041 kubelet[3177]: E0425 01:26:04.180187 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.181041 kubelet[3177]: W0425 01:26:04.180204 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.181041 kubelet[3177]: E0425 01:26:04.180217 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.181916 kubelet[3177]: E0425 01:26:04.181896 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.182009 kubelet[3177]: W0425 01:26:04.181996 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.182067 kubelet[3177]: E0425 01:26:04.182057 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.182450 kubelet[3177]: E0425 01:26:04.182407 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.182574 kubelet[3177]: W0425 01:26:04.182559 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.182655 kubelet[3177]: E0425 01:26:04.182631 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.182956 kubelet[3177]: E0425 01:26:04.182936 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.183045 kubelet[3177]: W0425 01:26:04.183028 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.183198 kubelet[3177]: E0425 01:26:04.183094 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.183387 kubelet[3177]: E0425 01:26:04.183376 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.183588 kubelet[3177]: W0425 01:26:04.183525 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.183588 kubelet[3177]: E0425 01:26:04.183544 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.184015 kubelet[3177]: E0425 01:26:04.183913 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.184015 kubelet[3177]: W0425 01:26:04.183927 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.184015 kubelet[3177]: E0425 01:26:04.183938 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.184269 kubelet[3177]: E0425 01:26:04.184257 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.184409 kubelet[3177]: W0425 01:26:04.184333 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.184409 kubelet[3177]: E0425 01:26:04.184347 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.184766 kubelet[3177]: E0425 01:26:04.184662 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.184766 kubelet[3177]: W0425 01:26:04.184673 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.184766 kubelet[3177]: E0425 01:26:04.184695 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.185051 kubelet[3177]: E0425 01:26:04.185039 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.185216 kubelet[3177]: W0425 01:26:04.185082 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.185216 kubelet[3177]: E0425 01:26:04.185096 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.185447 kubelet[3177]: E0425 01:26:04.185404 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.185447 kubelet[3177]: W0425 01:26:04.185416 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.185728 kubelet[3177]: E0425 01:26:04.185551 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.186248 kubelet[3177]: E0425 01:26:04.186233 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.186422 kubelet[3177]: W0425 01:26:04.186310 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.186422 kubelet[3177]: E0425 01:26:04.186327 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.186935 kubelet[3177]: E0425 01:26:04.186739 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.186935 kubelet[3177]: W0425 01:26:04.186752 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.186935 kubelet[3177]: E0425 01:26:04.186766 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.187316 kubelet[3177]: E0425 01:26:04.187301 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.187485 kubelet[3177]: W0425 01:26:04.187413 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.187825 kubelet[3177]: E0425 01:26:04.187715 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.188148 kubelet[3177]: E0425 01:26:04.188122 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.188257 kubelet[3177]: W0425 01:26:04.188244 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.188495 kubelet[3177]: E0425 01:26:04.188343 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.188798 kubelet[3177]: E0425 01:26:04.188785 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.189245 kubelet[3177]: W0425 01:26:04.189131 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.189245 kubelet[3177]: E0425 01:26:04.189153 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.189763 kubelet[3177]: E0425 01:26:04.189572 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.189763 kubelet[3177]: W0425 01:26:04.189586 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.189763 kubelet[3177]: E0425 01:26:04.189598 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.190690 kubelet[3177]: E0425 01:26:04.190568 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.190690 kubelet[3177]: W0425 01:26:04.190581 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.190690 kubelet[3177]: E0425 01:26:04.190594 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.190948 kubelet[3177]: E0425 01:26:04.190934 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.191015 kubelet[3177]: W0425 01:26:04.191004 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.191154 kubelet[3177]: E0425 01:26:04.191062 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.191521 kubelet[3177]: E0425 01:26:04.191433 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.191707 kubelet[3177]: W0425 01:26:04.191594 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.191707 kubelet[3177]: E0425 01:26:04.191616 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.192042 kubelet[3177]: E0425 01:26:04.191871 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.192042 kubelet[3177]: W0425 01:26:04.191885 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.192042 kubelet[3177]: E0425 01:26:04.191895 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.193288 kubelet[3177]: E0425 01:26:04.193139 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.193288 kubelet[3177]: W0425 01:26:04.193154 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.193288 kubelet[3177]: E0425 01:26:04.193170 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.193567 kubelet[3177]: E0425 01:26:04.193553 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.194495 kubelet[3177]: W0425 01:26:04.193635 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.194696 kubelet[3177]: E0425 01:26:04.194594 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.194944 kubelet[3177]: E0425 01:26:04.194860 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.194944 kubelet[3177]: W0425 01:26:04.194871 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.194944 kubelet[3177]: E0425 01:26:04.194884 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.197458 kubelet[3177]: E0425 01:26:04.197385 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.197458 kubelet[3177]: W0425 01:26:04.197401 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.197865 kubelet[3177]: E0425 01:26:04.197754 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.198571 kubelet[3177]: E0425 01:26:04.198420 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.198571 kubelet[3177]: W0425 01:26:04.198461 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.198571 kubelet[3177]: E0425 01:26:04.198474 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.198761 kubelet[3177]: E0425 01:26:04.198747 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.198819 kubelet[3177]: W0425 01:26:04.198808 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.198881 kubelet[3177]: E0425 01:26:04.198869 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.199637 kubelet[3177]: E0425 01:26:04.199618 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.200021 kubelet[3177]: W0425 01:26:04.199820 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.200021 kubelet[3177]: E0425 01:26:04.199843 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.201468 kubelet[3177]: E0425 01:26:04.200604 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.201468 kubelet[3177]: W0425 01:26:04.200620 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.201468 kubelet[3177]: E0425 01:26:04.200632 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.201668 kubelet[3177]: E0425 01:26:04.201654 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.201729 kubelet[3177]: W0425 01:26:04.201717 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.201783 kubelet[3177]: E0425 01:26:04.201773 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.202169 kubelet[3177]: E0425 01:26:04.202030 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.202169 kubelet[3177]: W0425 01:26:04.202043 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.202169 kubelet[3177]: E0425 01:26:04.202055 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.202338 kubelet[3177]: E0425 01:26:04.202327 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.202548 kubelet[3177]: W0425 01:26:04.202404 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.202648 kubelet[3177]: E0425 01:26:04.202636 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.204685 kubelet[3177]: E0425 01:26:04.204665 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.204917 kubelet[3177]: W0425 01:26:04.204769 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.204917 kubelet[3177]: E0425 01:26:04.204791 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.205457 kubelet[3177]: E0425 01:26:04.205257 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.205457 kubelet[3177]: W0425 01:26:04.205279 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.205457 kubelet[3177]: E0425 01:26:04.205292 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.206461 kubelet[3177]: E0425 01:26:04.206223 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.206461 kubelet[3177]: W0425 01:26:04.206244 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.206461 kubelet[3177]: E0425 01:26:04.206260 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.206930 kubelet[3177]: E0425 01:26:04.206835 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.207111 kubelet[3177]: W0425 01:26:04.207006 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.207209 kubelet[3177]: E0425 01:26:04.207186 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.207795 kubelet[3177]: E0425 01:26:04.207778 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.208605 kubelet[3177]: W0425 01:26:04.208471 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.208605 kubelet[3177]: E0425 01:26:04.208498 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.208754 kubelet[3177]: E0425 01:26:04.208742 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.208811 kubelet[3177]: W0425 01:26:04.208800 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.208873 kubelet[3177]: E0425 01:26:04.208862 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.209355 kubelet[3177]: E0425 01:26:04.209248 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.209780 kubelet[3177]: W0425 01:26:04.209549 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.209780 kubelet[3177]: E0425 01:26:04.209573 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.210228 kubelet[3177]: E0425 01:26:04.210209 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.210306 kubelet[3177]: W0425 01:26:04.210293 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.210365 kubelet[3177]: E0425 01:26:04.210355 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.211061 kubelet[3177]: E0425 01:26:04.210863 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.211061 kubelet[3177]: W0425 01:26:04.210881 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.211061 kubelet[3177]: E0425 01:26:04.210899 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.211263 kubelet[3177]: E0425 01:26:04.211181 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.211263 kubelet[3177]: W0425 01:26:04.211199 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.211263 kubelet[3177]: E0425 01:26:04.211212 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.231364 kubelet[3177]: E0425 01:26:04.231281 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.231364 kubelet[3177]: W0425 01:26:04.231302 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.231364 kubelet[3177]: E0425 01:26:04.231322 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.251514 containerd[1734]: time="2026-04-25T01:26:04.251472740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4db9bcdc-ss8sx,Uid:db912044-2799-4b9a-9cb5-3c218cc8a2f9,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:04.268153 kubelet[3177]: E0425 01:26:04.268125 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.268153 kubelet[3177]: W0425 01:26:04.268147 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.268392 kubelet[3177]: E0425 01:26:04.268169 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.268500 kubelet[3177]: E0425 01:26:04.268485 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.268555 kubelet[3177]: W0425 01:26:04.268500 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.268555 kubelet[3177]: E0425 01:26:04.268511 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.268752 kubelet[3177]: E0425 01:26:04.268739 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.268752 kubelet[3177]: W0425 01:26:04.268750 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.268835 kubelet[3177]: E0425 01:26:04.268759 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.268985 kubelet[3177]: E0425 01:26:04.268973 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.268985 kubelet[3177]: W0425 01:26:04.268984 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.269134 kubelet[3177]: E0425 01:26:04.268993 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.269220 kubelet[3177]: E0425 01:26:04.269209 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.269261 kubelet[3177]: W0425 01:26:04.269219 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.269261 kubelet[3177]: E0425 01:26:04.269231 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.269482 kubelet[3177]: E0425 01:26:04.269425 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.269482 kubelet[3177]: W0425 01:26:04.269444 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.269482 kubelet[3177]: E0425 01:26:04.269454 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.269764 kubelet[3177]: E0425 01:26:04.269752 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.269764 kubelet[3177]: W0425 01:26:04.269762 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.269841 kubelet[3177]: E0425 01:26:04.269771 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.270393 kubelet[3177]: E0425 01:26:04.270377 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.270393 kubelet[3177]: W0425 01:26:04.270392 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.270512 kubelet[3177]: E0425 01:26:04.270404 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.270609 kubelet[3177]: E0425 01:26:04.270596 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.270647 kubelet[3177]: W0425 01:26:04.270615 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.270647 kubelet[3177]: E0425 01:26:04.270624 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.270791 kubelet[3177]: E0425 01:26:04.270778 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.270791 kubelet[3177]: W0425 01:26:04.270789 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.270876 kubelet[3177]: E0425 01:26:04.270798 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.270947 kubelet[3177]: E0425 01:26:04.270935 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.270947 kubelet[3177]: W0425 01:26:04.270944 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.271026 kubelet[3177]: E0425 01:26:04.270952 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.271098 kubelet[3177]: E0425 01:26:04.271087 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.271098 kubelet[3177]: W0425 01:26:04.271096 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.271197 kubelet[3177]: E0425 01:26:04.271106 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.271255 kubelet[3177]: E0425 01:26:04.271242 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.271255 kubelet[3177]: W0425 01:26:04.271252 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.271324 kubelet[3177]: E0425 01:26:04.271261 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.271404 kubelet[3177]: E0425 01:26:04.271394 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.271404 kubelet[3177]: W0425 01:26:04.271403 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.271485 kubelet[3177]: E0425 01:26:04.271411 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.271681 kubelet[3177]: E0425 01:26:04.271666 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.271681 kubelet[3177]: W0425 01:26:04.271679 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.271743 kubelet[3177]: E0425 01:26:04.271690 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.271926 kubelet[3177]: E0425 01:26:04.271909 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.271926 kubelet[3177]: W0425 01:26:04.271924 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.272066 kubelet[3177]: E0425 01:26:04.271933 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.272154 kubelet[3177]: E0425 01:26:04.272098 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.272154 kubelet[3177]: W0425 01:26:04.272128 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.272154 kubelet[3177]: E0425 01:26:04.272140 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.272295 kubelet[3177]: E0425 01:26:04.272287 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.272350 kubelet[3177]: W0425 01:26:04.272295 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.272350 kubelet[3177]: E0425 01:26:04.272303 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.272529 kubelet[3177]: E0425 01:26:04.272516 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.272529 kubelet[3177]: W0425 01:26:04.272527 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.272535 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.272706 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.274046 kubelet[3177]: W0425 01:26:04.272714 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.272722 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.272866 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.274046 kubelet[3177]: W0425 01:26:04.272874 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.272881 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.273020 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.274046 kubelet[3177]: W0425 01:26:04.273027 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.274046 kubelet[3177]: E0425 01:26:04.273034 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.274720 kubelet[3177]: E0425 01:26:04.273182 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.274720 kubelet[3177]: W0425 01:26:04.273190 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.274720 kubelet[3177]: E0425 01:26:04.273199 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.274720 kubelet[3177]: E0425 01:26:04.273615 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.274720 kubelet[3177]: W0425 01:26:04.273629 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.274720 kubelet[3177]: E0425 01:26:04.273643 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.275352 kubelet[3177]: E0425 01:26:04.275272 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.275352 kubelet[3177]: W0425 01:26:04.275304 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.275352 kubelet[3177]: E0425 01:26:04.275320 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:04.289725 kubelet[3177]: E0425 01:26:04.289648 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:04.289725 kubelet[3177]: W0425 01:26:04.289668 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:04.289725 kubelet[3177]: E0425 01:26:04.289687 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:04.297702 containerd[1734]: time="2026-04-25T01:26:04.297599088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:04.297702 containerd[1734]: time="2026-04-25T01:26:04.297656888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:04.297702 containerd[1734]: time="2026-04-25T01:26:04.297671968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:04.297999 containerd[1734]: time="2026-04-25T01:26:04.297748928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:04.315638 systemd[1]: Started cri-containerd-d3f52dafbde042840a5130ce8ca907d4f486f67f086c6be19889357063abce45.scope - libcontainer container d3f52dafbde042840a5130ce8ca907d4f486f67f086c6be19889357063abce45. 
Apr 25 01:26:04.343316 containerd[1734]: time="2026-04-25T01:26:04.343278117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6cdl8,Uid:207e9921-348b-4272-8c2e-665ee7954782,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:04.346851 containerd[1734]: time="2026-04-25T01:26:04.346816916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4db9bcdc-ss8sx,Uid:db912044-2799-4b9a-9cb5-3c218cc8a2f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3f52dafbde042840a5130ce8ca907d4f486f67f086c6be19889357063abce45\"" Apr 25 01:26:04.350143 containerd[1734]: time="2026-04-25T01:26:04.350097995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 25 01:26:04.384774 containerd[1734]: time="2026-04-25T01:26:04.384530947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:04.384774 containerd[1734]: time="2026-04-25T01:26:04.384601667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:04.384774 containerd[1734]: time="2026-04-25T01:26:04.384612827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:04.384774 containerd[1734]: time="2026-04-25T01:26:04.384701387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:04.400729 systemd[1]: Started cri-containerd-a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e.scope - libcontainer container a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e. 
Apr 25 01:26:04.422626 containerd[1734]: time="2026-04-25T01:26:04.422420737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6cdl8,Uid:207e9921-348b-4272-8c2e-665ee7954782,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\"" Apr 25 01:26:05.725335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342117340.mount: Deactivated successfully. Apr 25 01:26:06.137602 kubelet[3177]: E0425 01:26:06.137553 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:06.933632 containerd[1734]: time="2026-04-25T01:26:06.933587036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:06.937460 containerd[1734]: time="2026-04-25T01:26:06.937409195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 25 01:26:06.940635 containerd[1734]: time="2026-04-25T01:26:06.940593594Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:06.945625 containerd[1734]: time="2026-04-25T01:26:06.945568793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:06.946516 containerd[1734]: time="2026-04-25T01:26:06.946382393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.596245878s" Apr 25 01:26:06.946516 containerd[1734]: time="2026-04-25T01:26:06.946413913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 25 01:26:06.947936 containerd[1734]: time="2026-04-25T01:26:06.947534353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 25 01:26:06.966781 containerd[1734]: time="2026-04-25T01:26:06.966747188Z" level=info msg="CreateContainer within sandbox \"d3f52dafbde042840a5130ce8ca907d4f486f67f086c6be19889357063abce45\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 25 01:26:07.007906 containerd[1734]: time="2026-04-25T01:26:07.007858618Z" level=info msg="CreateContainer within sandbox \"d3f52dafbde042840a5130ce8ca907d4f486f67f086c6be19889357063abce45\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a4ef75e53a1440be7111b3fd015f0c6ea3f8d63f070bad2b28717e9de40f0194\"" Apr 25 01:26:07.009907 containerd[1734]: time="2026-04-25T01:26:07.009866537Z" level=info msg="StartContainer for \"a4ef75e53a1440be7111b3fd015f0c6ea3f8d63f070bad2b28717e9de40f0194\"" Apr 25 01:26:07.036639 systemd[1]: Started cri-containerd-a4ef75e53a1440be7111b3fd015f0c6ea3f8d63f070bad2b28717e9de40f0194.scope - libcontainer container a4ef75e53a1440be7111b3fd015f0c6ea3f8d63f070bad2b28717e9de40f0194. 
Apr 25 01:26:07.076096 containerd[1734]: time="2026-04-25T01:26:07.076051281Z" level=info msg="StartContainer for \"a4ef75e53a1440be7111b3fd015f0c6ea3f8d63f070bad2b28717e9de40f0194\" returns successfully" Apr 25 01:26:07.278284 kubelet[3177]: E0425 01:26:07.278244 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.278697 kubelet[3177]: W0425 01:26:07.278269 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.278750 kubelet[3177]: E0425 01:26:07.278703 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.279682 kubelet[3177]: E0425 01:26:07.279659 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.279682 kubelet[3177]: W0425 01:26:07.279676 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.280496 kubelet[3177]: E0425 01:26:07.279689 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.280893 kubelet[3177]: E0425 01:26:07.280874 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.280893 kubelet[3177]: W0425 01:26:07.280888 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.280989 kubelet[3177]: E0425 01:26:07.280903 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.283527 kubelet[3177]: E0425 01:26:07.281482 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.283597 kubelet[3177]: W0425 01:26:07.283527 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.283597 kubelet[3177]: E0425 01:26:07.283548 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.284750 kubelet[3177]: E0425 01:26:07.284281 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.284750 kubelet[3177]: W0425 01:26:07.284297 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.284750 kubelet[3177]: E0425 01:26:07.284408 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.284750 kubelet[3177]: E0425 01:26:07.284674 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.284750 kubelet[3177]: W0425 01:26:07.284684 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.284750 kubelet[3177]: E0425 01:26:07.284711 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.285094 kubelet[3177]: E0425 01:26:07.285072 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.285094 kubelet[3177]: W0425 01:26:07.285086 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.285162 kubelet[3177]: E0425 01:26:07.285098 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.285327 kubelet[3177]: E0425 01:26:07.285309 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.285327 kubelet[3177]: W0425 01:26:07.285322 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.285400 kubelet[3177]: E0425 01:26:07.285332 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.285793 kubelet[3177]: E0425 01:26:07.285770 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.285836 kubelet[3177]: W0425 01:26:07.285786 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.285836 kubelet[3177]: E0425 01:26:07.285816 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.286245 kubelet[3177]: E0425 01:26:07.285992 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.286245 kubelet[3177]: W0425 01:26:07.286004 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.286245 kubelet[3177]: E0425 01:26:07.286014 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.286245 kubelet[3177]: E0425 01:26:07.286224 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.286245 kubelet[3177]: W0425 01:26:07.286233 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.286396 kubelet[3177]: E0425 01:26:07.286256 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.286568 kubelet[3177]: E0425 01:26:07.286475 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.286568 kubelet[3177]: W0425 01:26:07.286490 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.286568 kubelet[3177]: E0425 01:26:07.286516 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.288274 kubelet[3177]: E0425 01:26:07.286728 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.288274 kubelet[3177]: W0425 01:26:07.286749 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.288274 kubelet[3177]: E0425 01:26:07.286759 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.288274 kubelet[3177]: E0425 01:26:07.287952 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.288274 kubelet[3177]: W0425 01:26:07.287966 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.288274 kubelet[3177]: E0425 01:26:07.287978 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.288274 kubelet[3177]: E0425 01:26:07.288200 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.288274 kubelet[3177]: W0425 01:26:07.288208 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.288274 kubelet[3177]: E0425 01:26:07.288217 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.290899 kubelet[3177]: E0425 01:26:07.290525 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.290899 kubelet[3177]: W0425 01:26:07.290544 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.290899 kubelet[3177]: E0425 01:26:07.290571 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.290899 kubelet[3177]: E0425 01:26:07.290784 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.290899 kubelet[3177]: W0425 01:26:07.290791 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.290899 kubelet[3177]: E0425 01:26:07.290800 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.291363 kubelet[3177]: E0425 01:26:07.291135 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.291363 kubelet[3177]: W0425 01:26:07.291145 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.291495 kubelet[3177]: E0425 01:26:07.291470 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.292252 kubelet[3177]: E0425 01:26:07.292201 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.292252 kubelet[3177]: W0425 01:26:07.292221 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.292252 kubelet[3177]: E0425 01:26:07.292253 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.292760 kubelet[3177]: E0425 01:26:07.292731 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.292760 kubelet[3177]: W0425 01:26:07.292748 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.292963 kubelet[3177]: E0425 01:26:07.292759 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.293278 kubelet[3177]: E0425 01:26:07.293256 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.293278 kubelet[3177]: W0425 01:26:07.293271 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.293387 kubelet[3177]: E0425 01:26:07.293284 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.294248 kubelet[3177]: E0425 01:26:07.293955 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.294248 kubelet[3177]: W0425 01:26:07.293975 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.294248 kubelet[3177]: E0425 01:26:07.293988 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.294248 kubelet[3177]: E0425 01:26:07.294196 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.294248 kubelet[3177]: W0425 01:26:07.294205 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.294248 kubelet[3177]: E0425 01:26:07.294215 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.294426 kubelet[3177]: E0425 01:26:07.294374 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.294426 kubelet[3177]: W0425 01:26:07.294382 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.294426 kubelet[3177]: E0425 01:26:07.294392 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.294607 kubelet[3177]: E0425 01:26:07.294589 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.294607 kubelet[3177]: W0425 01:26:07.294600 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.294684 kubelet[3177]: E0425 01:26:07.294609 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.296881 kubelet[3177]: E0425 01:26:07.295722 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.296881 kubelet[3177]: W0425 01:26:07.295739 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.296881 kubelet[3177]: E0425 01:26:07.295752 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.297275 kubelet[3177]: E0425 01:26:07.297255 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.297275 kubelet[3177]: W0425 01:26:07.297272 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.297365 kubelet[3177]: E0425 01:26:07.297286 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.297528 kubelet[3177]: E0425 01:26:07.297512 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.297528 kubelet[3177]: W0425 01:26:07.297526 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.297595 kubelet[3177]: E0425 01:26:07.297536 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.297743 kubelet[3177]: E0425 01:26:07.297728 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.297743 kubelet[3177]: W0425 01:26:07.297741 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.297814 kubelet[3177]: E0425 01:26:07.297751 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.298252 kubelet[3177]: E0425 01:26:07.298228 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.298252 kubelet[3177]: W0425 01:26:07.298245 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.298320 kubelet[3177]: E0425 01:26:07.298256 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.298778 kubelet[3177]: E0425 01:26:07.298753 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.298778 kubelet[3177]: W0425 01:26:07.298771 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.298891 kubelet[3177]: E0425 01:26:07.298787 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:07.299154 kubelet[3177]: E0425 01:26:07.299138 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.299154 kubelet[3177]: W0425 01:26:07.299152 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.299229 kubelet[3177]: E0425 01:26:07.299163 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:07.301830 kubelet[3177]: E0425 01:26:07.301248 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:07.301830 kubelet[3177]: W0425 01:26:07.301269 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:07.301830 kubelet[3177]: E0425 01:26:07.301281 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:08.137947 kubelet[3177]: E0425 01:26:08.137608 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:08.223910 kubelet[3177]: I0425 01:26:08.223881 3177 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 01:26:08.294134 kubelet[3177]: E0425 01:26:08.293891 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.294134 kubelet[3177]: W0425 01:26:08.293917 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.294134 kubelet[3177]: E0425 01:26:08.293949 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:08.294815 kubelet[3177]: E0425 01:26:08.294400 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.294815 kubelet[3177]: W0425 01:26:08.294411 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.294815 kubelet[3177]: E0425 01:26:08.294430 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:08.295663 kubelet[3177]: E0425 01:26:08.295123 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.295663 kubelet[3177]: W0425 01:26:08.295140 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.295663 kubelet[3177]: E0425 01:26:08.295317 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:08.295663 kubelet[3177]: E0425 01:26:08.295651 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.295663 kubelet[3177]: W0425 01:26:08.295662 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.296012 kubelet[3177]: E0425 01:26:08.295674 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:08.296333 kubelet[3177]: E0425 01:26:08.296137 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.296333 kubelet[3177]: W0425 01:26:08.296148 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.296333 kubelet[3177]: E0425 01:26:08.296158 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:08.296880 kubelet[3177]: E0425 01:26:08.296362 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.296880 kubelet[3177]: W0425 01:26:08.296426 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.296880 kubelet[3177]: E0425 01:26:08.296458 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 25 01:26:08.297573 kubelet[3177]: E0425 01:26:08.297156 3177 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 25 01:26:08.297573 kubelet[3177]: W0425 01:26:08.297170 3177 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 25 01:26:08.297573 kubelet[3177]: E0425 01:26:08.297185 3177 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 25 01:26:08.375404 containerd[1734]: time="2026-04-25T01:26:08.375358599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:08.379194 containerd[1734]: time="2026-04-25T01:26:08.379162879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 25 01:26:08.382640 containerd[1734]: time="2026-04-25T01:26:08.382590838Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:08.387305 containerd[1734]: time="2026-04-25T01:26:08.387202557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:08.388503 containerd[1734]: time="2026-04-25T01:26:08.387873636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.440309123s" Apr 25 01:26:08.388503 containerd[1734]: time="2026-04-25T01:26:08.387912476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 25 01:26:08.398258 containerd[1734]: time="2026-04-25T01:26:08.398170434Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 25 01:26:08.436902 containerd[1734]: time="2026-04-25T01:26:08.436709384Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0\"" Apr 25 01:26:08.437633 containerd[1734]: time="2026-04-25T01:26:08.437591064Z" level=info msg="StartContainer for \"43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0\"" Apr 25 01:26:08.473596 systemd[1]: Started cri-containerd-43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0.scope - libcontainer container 43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0. Apr 25 01:26:08.506129 containerd[1734]: time="2026-04-25T01:26:08.505724767Z" level=info msg="StartContainer for \"43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0\" returns successfully" Apr 25 01:26:08.511813 systemd[1]: cri-containerd-43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0.scope: Deactivated successfully. Apr 25 01:26:08.535348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0-rootfs.mount: Deactivated successfully. 
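The repeated kubelet errors above ("Failed to unmarshal output for command: init, output: \"\"") occur because the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so the driver call produces empty output that fails JSON parsing; the flexvol-driver init container being pulled and started here is what installs it. Kubelet expects every driver invocation, starting with `init`, to print a JSON status object on stdout. A minimal sketch of a conforming stub (illustrative only, not the actual uds driver):

```shell
# Hypothetical FlexVolume driver stub, sketched as a shell function.
# kubelet invokes the driver binary as `<driver> init` and parses stdout
# as JSON; an empty reply yields the "unexpected end of JSON input"
# errors seen in the log above.
flexvolume_driver() {
  case "$1" in
    init)
      # Report success; "attach": false tells kubelet this driver
      # does not implement attach/detach.
      echo '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      # Unsupported operations must still answer with valid JSON.
      echo '{"status":"Not supported"}'
      return 1
      ;;
  esac
}

flexvolume_driver init
# → {"status":"Success","capabilities":{"attach":false}}
```

Once the real driver binary appears in the exec directory, kubelet's periodic plugin probe succeeds and the error burst stops.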
Apr 25 01:26:09.248786 kubelet[3177]: I0425 01:26:09.248729 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6c4db9bcdc-ss8sx" podStartSLOduration=3.650601466 podStartE2EDuration="6.248715423s" podCreationTimestamp="2026-04-25 01:26:03 +0000 UTC" firstStartedPulling="2026-04-25 01:26:04.349257636 +0000 UTC m=+22.327879363" lastFinishedPulling="2026-04-25 01:26:06.947371593 +0000 UTC m=+24.925993320" observedRunningTime="2026-04-25 01:26:07.270713273 +0000 UTC m=+25.249335000" watchObservedRunningTime="2026-04-25 01:26:09.248715423 +0000 UTC m=+27.227337150" Apr 25 01:26:09.634683 containerd[1734]: time="2026-04-25T01:26:09.634612408Z" level=info msg="shim disconnected" id=43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0 namespace=k8s.io Apr 25 01:26:09.634683 containerd[1734]: time="2026-04-25T01:26:09.634676088Z" level=warning msg="cleaning up after shim disconnected" id=43defad1dc4a087cca22735a08188676e996c442a1804b7c896c40770d18a7d0 namespace=k8s.io Apr 25 01:26:09.634683 containerd[1734]: time="2026-04-25T01:26:09.634684088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 01:26:10.137590 kubelet[3177]: E0425 01:26:10.137105 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:10.232390 containerd[1734]: time="2026-04-25T01:26:10.232278060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 25 01:26:12.139225 kubelet[3177]: E0425 01:26:12.138737 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:14.141898 kubelet[3177]: E0425 01:26:14.141013 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:16.139299 kubelet[3177]: E0425 01:26:16.139229 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:18.137679 kubelet[3177]: E0425 01:26:18.136706 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:19.463409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051329507.mount: Deactivated successfully. 
Apr 25 01:26:20.137824 kubelet[3177]: E0425 01:26:20.137784 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:20.350468 containerd[1734]: time="2026-04-25T01:26:20.350257619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:20.353362 containerd[1734]: time="2026-04-25T01:26:20.353328578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 25 01:26:20.376139 containerd[1734]: time="2026-04-25T01:26:20.376093571Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:20.380668 containerd[1734]: time="2026-04-25T01:26:20.380603250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:20.382247 containerd[1734]: time="2026-04-25T01:26:20.381826170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 10.14940295s" Apr 25 01:26:20.382247 containerd[1734]: time="2026-04-25T01:26:20.381863610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference 
\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 25 01:26:20.390926 containerd[1734]: time="2026-04-25T01:26:20.390679047Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 25 01:26:20.433417 containerd[1734]: time="2026-04-25T01:26:20.433335194Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7\"" Apr 25 01:26:20.434991 containerd[1734]: time="2026-04-25T01:26:20.433923394Z" level=info msg="StartContainer for \"e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7\"" Apr 25 01:26:20.467646 systemd[1]: Started cri-containerd-e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7.scope - libcontainer container e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7. Apr 25 01:26:20.498376 containerd[1734]: time="2026-04-25T01:26:20.498261655Z" level=info msg="StartContainer for \"e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7\" returns successfully" Apr 25 01:26:20.535904 systemd[1]: cri-containerd-e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7.scope: Deactivated successfully. Apr 25 01:26:20.554566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7-rootfs.mount: Deactivated successfully. 
Apr 25 01:26:21.290910 containerd[1734]: time="2026-04-25T01:26:21.290835899Z" level=info msg="shim disconnected" id=e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7 namespace=k8s.io Apr 25 01:26:21.290910 containerd[1734]: time="2026-04-25T01:26:21.290903699Z" level=warning msg="cleaning up after shim disconnected" id=e67cfd3eaecd3485e24b67c5a4c1e17a79a2e1703508b33c5d3683e4f33746e7 namespace=k8s.io Apr 25 01:26:21.290910 containerd[1734]: time="2026-04-25T01:26:21.290913299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 01:26:22.137037 kubelet[3177]: E0425 01:26:22.136990 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:22.259865 containerd[1734]: time="2026-04-25T01:26:22.259755251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 25 01:26:24.139298 kubelet[3177]: E0425 01:26:24.139262 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:26.137726 kubelet[3177]: E0425 01:26:26.137687 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:26.153293 containerd[1734]: time="2026-04-25T01:26:26.153240054Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:26.156354 containerd[1734]: time="2026-04-25T01:26:26.156218013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 25 01:26:26.159872 containerd[1734]: time="2026-04-25T01:26:26.159843532Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:26.164376 containerd[1734]: time="2026-04-25T01:26:26.164326211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:26.165582 containerd[1734]: time="2026-04-25T01:26:26.165002291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.90520448s" Apr 25 01:26:26.165582 containerd[1734]: time="2026-04-25T01:26:26.165033971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 25 01:26:26.174452 containerd[1734]: time="2026-04-25T01:26:26.174399368Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 25 01:26:26.212736 containerd[1734]: time="2026-04-25T01:26:26.212692797Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3\"" Apr 25 01:26:26.213876 containerd[1734]: time="2026-04-25T01:26:26.213626676Z" level=info msg="StartContainer for \"90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3\"" Apr 25 01:26:26.246640 systemd[1]: Started cri-containerd-90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3.scope - libcontainer container 90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3. Apr 25 01:26:26.280165 containerd[1734]: time="2026-04-25T01:26:26.280089657Z" level=info msg="StartContainer for \"90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3\" returns successfully" Apr 25 01:26:27.617264 containerd[1734]: time="2026-04-25T01:26:27.617213973Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 25 01:26:27.620371 systemd[1]: cri-containerd-90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3.scope: Deactivated successfully. Apr 25 01:26:27.645802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3-rootfs.mount: Deactivated successfully. Apr 25 01:26:27.662772 kubelet[3177]: I0425 01:26:27.662744 3177 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 25 01:26:28.493748 systemd[1]: Created slice kubepods-besteffort-podd011a28b_5a3d_4c2d_8307_64cd488a9cdd.slice - libcontainer container kubepods-besteffort-podd011a28b_5a3d_4c2d_8307_64cd488a9cdd.slice. 
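The "cni plugin not initialized" and "no network config found in /etc/cni/net.d" messages above persist until the install-cni container finishes writing a network configuration there; the fs change event for /etc/cni/net.d/calico-kubeconfig fires before the conflist itself lands, hence the transient reload failure. As a sketch (assumed layout, not the exact file Calico generates), the conflist install-cni drops looks roughly like this:

```shell
# Illustrative only: write a Calico-style conflist of the shape containerd
# looks for in /etc/cni/net.d (using a temp path here, not the real one).
cat > "${TMPDIR:-/tmp}/10-calico.conflist" <<'EOF'
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    { "type": "portmap", "snat": true, "capabilities": { "portMappings": true } }
  ]
}
EOF
```

Once a valid conflist is present, containerd reloads the CNI config, the node reports NetworkReady, and kubelet's "Fast updating node status as it just became ready" line above follows.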
Apr 25 01:26:28.498089 containerd[1734]: time="2026-04-25T01:26:28.497743902Z" level=info msg="shim disconnected" id=90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3 namespace=k8s.io Apr 25 01:26:28.498089 containerd[1734]: time="2026-04-25T01:26:28.497797582Z" level=warning msg="cleaning up after shim disconnected" id=90a8758db4b590333f990be3c9a9c560fbcb16c166a06393e33927360a0853e3 namespace=k8s.io Apr 25 01:26:28.498089 containerd[1734]: time="2026-04-25T01:26:28.497805822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 01:26:28.503711 systemd[1]: Created slice kubepods-besteffort-pod824a5d45_7c15_4527_82b4_bbcfeeb63e50.slice - libcontainer container kubepods-besteffort-pod824a5d45_7c15_4527_82b4_bbcfeeb63e50.slice. Apr 25 01:26:28.514046 systemd[1]: Created slice kubepods-burstable-pod9a4c953d_e4d6_4586_907f_7af01091f4b3.slice - libcontainer container kubepods-burstable-pod9a4c953d_e4d6_4586_907f_7af01091f4b3.slice. Apr 25 01:26:28.518865 kubelet[3177]: I0425 01:26:28.518304 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbf2p\" (UniqueName: \"kubernetes.io/projected/533f9528-3ff0-42b3-85ca-983925f11ebc-kube-api-access-bbf2p\") pod \"coredns-7d764666f9-xz5zq\" (UID: \"533f9528-3ff0-42b3-85ca-983925f11ebc\") " pod="kube-system/coredns-7d764666f9-xz5zq" Apr 25 01:26:28.518865 kubelet[3177]: I0425 01:26:28.518340 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbct\" (UniqueName: \"kubernetes.io/projected/9a4c953d-e4d6-4586-907f-7af01091f4b3-kube-api-access-8gbct\") pod \"coredns-7d764666f9-hmvfq\" (UID: \"9a4c953d-e4d6-4586-907f-7af01091f4b3\") " pod="kube-system/coredns-7d764666f9-hmvfq" Apr 25 01:26:28.518865 kubelet[3177]: I0425 01:26:28.518378 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/533f9528-3ff0-42b3-85ca-983925f11ebc-config-volume\") pod \"coredns-7d764666f9-xz5zq\" (UID: \"533f9528-3ff0-42b3-85ca-983925f11ebc\") " pod="kube-system/coredns-7d764666f9-xz5zq" Apr 25 01:26:28.518865 kubelet[3177]: I0425 01:26:28.518395 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-nginx-config\") pod \"whisker-68fdc8476-7bgd5\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " pod="calico-system/whisker-68fdc8476-7bgd5" Apr 25 01:26:28.518865 kubelet[3177]: I0425 01:26:28.518409 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzc6\" (UniqueName: \"kubernetes.io/projected/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-kube-api-access-bhzc6\") pod \"whisker-68fdc8476-7bgd5\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " pod="calico-system/whisker-68fdc8476-7bgd5" Apr 25 01:26:28.519088 kubelet[3177]: I0425 01:26:28.518427 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-backend-key-pair\") pod \"whisker-68fdc8476-7bgd5\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " pod="calico-system/whisker-68fdc8476-7bgd5" Apr 25 01:26:28.519088 kubelet[3177]: I0425 01:26:28.518465 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-ca-bundle\") pod \"whisker-68fdc8476-7bgd5\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " pod="calico-system/whisker-68fdc8476-7bgd5" Apr 25 01:26:28.519088 kubelet[3177]: I0425 01:26:28.518487 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a4c953d-e4d6-4586-907f-7af01091f4b3-config-volume\") pod \"coredns-7d764666f9-hmvfq\" (UID: \"9a4c953d-e4d6-4586-907f-7af01091f4b3\") " pod="kube-system/coredns-7d764666f9-hmvfq" Apr 25 01:26:28.526617 systemd[1]: Created slice kubepods-burstable-pod533f9528_3ff0_42b3_85ca_983925f11ebc.slice - libcontainer container kubepods-burstable-pod533f9528_3ff0_42b3_85ca_983925f11ebc.slice. Apr 25 01:26:28.548090 containerd[1734]: time="2026-04-25T01:26:28.548048096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9jdr,Uid:824a5d45-7c15-4527-82b4-bbcfeeb63e50,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:28.551548 systemd[1]: Created slice kubepods-besteffort-podbfc2469b_0460_4c01_a157_10152c3a426a.slice - libcontainer container kubepods-besteffort-podbfc2469b_0460_4c01_a157_10152c3a426a.slice. Apr 25 01:26:28.569357 systemd[1]: Created slice kubepods-besteffort-pod6fe3ea34_3f40_4037_a8dd_3ea381b9fa21.slice - libcontainer container kubepods-besteffort-pod6fe3ea34_3f40_4037_a8dd_3ea381b9fa21.slice. Apr 25 01:26:28.576354 systemd[1]: Created slice kubepods-besteffort-podcdad5612_4286_46a6_867f_878844a351de.slice - libcontainer container kubepods-besteffort-podcdad5612_4286_46a6_867f_878844a351de.slice. Apr 25 01:26:28.594906 systemd[1]: Created slice kubepods-besteffort-pod95e54b6a_50b6_4304_b325_75ea339a6594.slice - libcontainer container kubepods-besteffort-pod95e54b6a_50b6_4304_b325_75ea339a6594.slice. 
Apr 25 01:26:28.620216 kubelet[3177]: I0425 01:26:28.619556 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t42l\" (UniqueName: \"kubernetes.io/projected/bfc2469b-0460-4c01-a157-10152c3a426a-kube-api-access-9t42l\") pod \"calico-kube-controllers-85bd8bbdbd-vbd9d\" (UID: \"bfc2469b-0460-4c01-a157-10152c3a426a\") " pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" Apr 25 01:26:28.620216 kubelet[3177]: I0425 01:26:28.619603 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cdad5612-4286-46a6-867f-878844a351de-goldmane-key-pair\") pod \"goldmane-9f7667bb8-s6dwd\" (UID: \"cdad5612-4286-46a6-867f-878844a351de\") " pod="calico-system/goldmane-9f7667bb8-s6dwd" Apr 25 01:26:28.620368 kubelet[3177]: I0425 01:26:28.620343 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfc2469b-0460-4c01-a157-10152c3a426a-tigera-ca-bundle\") pod \"calico-kube-controllers-85bd8bbdbd-vbd9d\" (UID: \"bfc2469b-0460-4c01-a157-10152c3a426a\") " pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" Apr 25 01:26:28.620395 kubelet[3177]: I0425 01:26:28.620381 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6fe3ea34-3f40-4037-a8dd-3ea381b9fa21-calico-apiserver-certs\") pod \"calico-apiserver-684644c847-7pwrk\" (UID: \"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21\") " pod="calico-system/calico-apiserver-684644c847-7pwrk" Apr 25 01:26:28.620430 kubelet[3177]: I0425 01:26:28.620398 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxk2\" (UniqueName: 
\"kubernetes.io/projected/6fe3ea34-3f40-4037-a8dd-3ea381b9fa21-kube-api-access-6rxk2\") pod \"calico-apiserver-684644c847-7pwrk\" (UID: \"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21\") " pod="calico-system/calico-apiserver-684644c847-7pwrk" Apr 25 01:26:28.620430 kubelet[3177]: I0425 01:26:28.620415 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95e54b6a-50b6-4304-b325-75ea339a6594-calico-apiserver-certs\") pod \"calico-apiserver-684644c847-8lnh2\" (UID: \"95e54b6a-50b6-4304-b325-75ea339a6594\") " pod="calico-system/calico-apiserver-684644c847-8lnh2" Apr 25 01:26:28.622329 kubelet[3177]: I0425 01:26:28.620467 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdad5612-4286-46a6-867f-878844a351de-config\") pod \"goldmane-9f7667bb8-s6dwd\" (UID: \"cdad5612-4286-46a6-867f-878844a351de\") " pod="calico-system/goldmane-9f7667bb8-s6dwd" Apr 25 01:26:28.622329 kubelet[3177]: I0425 01:26:28.620506 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhps\" (UniqueName: \"kubernetes.io/projected/95e54b6a-50b6-4304-b325-75ea339a6594-kube-api-access-rnhps\") pod \"calico-apiserver-684644c847-8lnh2\" (UID: \"95e54b6a-50b6-4304-b325-75ea339a6594\") " pod="calico-system/calico-apiserver-684644c847-8lnh2" Apr 25 01:26:28.622329 kubelet[3177]: I0425 01:26:28.620520 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdad5612-4286-46a6-867f-878844a351de-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-s6dwd\" (UID: \"cdad5612-4286-46a6-867f-878844a351de\") " pod="calico-system/goldmane-9f7667bb8-s6dwd" Apr 25 01:26:28.622329 kubelet[3177]: I0425 01:26:28.620538 3177 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f977\" (UniqueName: \"kubernetes.io/projected/cdad5612-4286-46a6-867f-878844a351de-kube-api-access-4f977\") pod \"goldmane-9f7667bb8-s6dwd\" (UID: \"cdad5612-4286-46a6-867f-878844a351de\") " pod="calico-system/goldmane-9f7667bb8-s6dwd" Apr 25 01:26:28.695211 containerd[1734]: time="2026-04-25T01:26:28.695128797Z" level=error msg="Failed to destroy network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.696103 containerd[1734]: time="2026-04-25T01:26:28.695930637Z" level=error msg="encountered an error cleaning up failed sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.696103 containerd[1734]: time="2026-04-25T01:26:28.695999957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9jdr,Uid:824a5d45-7c15-4527-82b4-bbcfeeb63e50,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.697521 kubelet[3177]: E0425 01:26:28.697481 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.697762 kubelet[3177]: E0425 01:26:28.697553 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:28.697762 kubelet[3177]: E0425 01:26:28.697572 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9jdr" Apr 25 01:26:28.697762 kubelet[3177]: E0425 01:26:28.697637 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m9jdr_calico-system(824a5d45-7c15-4527-82b4-bbcfeeb63e50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m9jdr_calico-system(824a5d45-7c15-4527-82b4-bbcfeeb63e50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:28.699745 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869-shm.mount: Deactivated successfully. Apr 25 01:26:28.805951 containerd[1734]: time="2026-04-25T01:26:28.805911503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68fdc8476-7bgd5,Uid:d011a28b-5a3d-4c2d-8307-64cd488a9cdd,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:28.826948 containerd[1734]: time="2026-04-25T01:26:28.826671821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmvfq,Uid:9a4c953d-e4d6-4586-907f-7af01091f4b3,Namespace:kube-system,Attempt:0,}" Apr 25 01:26:28.836675 containerd[1734]: time="2026-04-25T01:26:28.836630660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xz5zq,Uid:533f9528-3ff0-42b3-85ca-983925f11ebc,Namespace:kube-system,Attempt:0,}" Apr 25 01:26:28.876970 containerd[1734]: time="2026-04-25T01:26:28.876663815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bd8bbdbd-vbd9d,Uid:bfc2469b-0460-4c01-a157-10152c3a426a,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:28.881727 containerd[1734]: time="2026-04-25T01:26:28.881678694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-7pwrk,Uid:6fe3ea34-3f40-4037-a8dd-3ea381b9fa21,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:28.887632 containerd[1734]: time="2026-04-25T01:26:28.887594693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-s6dwd,Uid:cdad5612-4286-46a6-867f-878844a351de,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:28.892751 containerd[1734]: time="2026-04-25T01:26:28.892639453Z" level=error msg="Failed to destroy network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 25 01:26:28.893147 containerd[1734]: time="2026-04-25T01:26:28.893040292Z" level=error msg="encountered an error cleaning up failed sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.893147 containerd[1734]: time="2026-04-25T01:26:28.893105932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68fdc8476-7bgd5,Uid:d011a28b-5a3d-4c2d-8307-64cd488a9cdd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.893642 kubelet[3177]: E0425 01:26:28.893421 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.893642 kubelet[3177]: E0425 01:26:28.893523 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68fdc8476-7bgd5" Apr 25 01:26:28.893642 kubelet[3177]: E0425 01:26:28.893553 
3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68fdc8476-7bgd5" Apr 25 01:26:28.894780 kubelet[3177]: E0425 01:26:28.893825 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68fdc8476-7bgd5_calico-system(d011a28b-5a3d-4c2d-8307-64cd488a9cdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68fdc8476-7bgd5_calico-system(d011a28b-5a3d-4c2d-8307-64cd488a9cdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68fdc8476-7bgd5" podUID="d011a28b-5a3d-4c2d-8307-64cd488a9cdd" Apr 25 01:26:28.904665 containerd[1734]: time="2026-04-25T01:26:28.904629011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-8lnh2,Uid:95e54b6a-50b6-4304-b325-75ea339a6594,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:28.975466 containerd[1734]: time="2026-04-25T01:26:28.975294962Z" level=error msg="Failed to destroy network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.975629 containerd[1734]: time="2026-04-25T01:26:28.975602762Z" level=error msg="encountered an error cleaning 
up failed sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.975847 containerd[1734]: time="2026-04-25T01:26:28.975655482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xz5zq,Uid:533f9528-3ff0-42b3-85ca-983925f11ebc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.975910 kubelet[3177]: E0425 01:26:28.975870 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.975952 kubelet[3177]: E0425 01:26:28.975920 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-xz5zq" Apr 25 01:26:28.975952 kubelet[3177]: E0425 01:26:28.975938 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-xz5zq" Apr 25 01:26:28.976004 kubelet[3177]: E0425 01:26:28.975984 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-xz5zq_kube-system(533f9528-3ff0-42b3-85ca-983925f11ebc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-xz5zq_kube-system(533f9528-3ff0-42b3-85ca-983925f11ebc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-xz5zq" podUID="533f9528-3ff0-42b3-85ca-983925f11ebc" Apr 25 01:26:28.992995 containerd[1734]: time="2026-04-25T01:26:28.992945200Z" level=error msg="Failed to destroy network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.993799 containerd[1734]: time="2026-04-25T01:26:28.993766880Z" level=error msg="encountered an error cleaning up failed sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.993845 containerd[1734]: time="2026-04-25T01:26:28.993822760Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmvfq,Uid:9a4c953d-e4d6-4586-907f-7af01091f4b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.995719 kubelet[3177]: E0425 01:26:28.994035 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:28.995719 kubelet[3177]: E0425 01:26:28.994102 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hmvfq" Apr 25 01:26:28.995719 kubelet[3177]: E0425 01:26:28.994121 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-hmvfq" Apr 25 01:26:28.995870 kubelet[3177]: E0425 01:26:28.994169 3177 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-hmvfq_kube-system(9a4c953d-e4d6-4586-907f-7af01091f4b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-hmvfq_kube-system(9a4c953d-e4d6-4586-907f-7af01091f4b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hmvfq" podUID="9a4c953d-e4d6-4586-907f-7af01091f4b3" Apr 25 01:26:29.113824 containerd[1734]: time="2026-04-25T01:26:29.113541625Z" level=error msg="Failed to destroy network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.115795 containerd[1734]: time="2026-04-25T01:26:29.115411544Z" level=error msg="encountered an error cleaning up failed sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.115795 containerd[1734]: time="2026-04-25T01:26:29.115475304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-7pwrk,Uid:6fe3ea34-3f40-4037-a8dd-3ea381b9fa21,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.116608 kubelet[3177]: E0425 01:26:29.116402 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.116608 kubelet[3177]: E0425 01:26:29.116516 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-684644c847-7pwrk" Apr 25 01:26:29.117876 kubelet[3177]: E0425 01:26:29.117764 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-684644c847-7pwrk" Apr 25 01:26:29.119463 containerd[1734]: time="2026-04-25T01:26:29.119331224Z" level=error msg="Failed to destroy network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.119927 kubelet[3177]: E0425 01:26:29.119842 3177 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-684644c847-7pwrk_calico-system(6fe3ea34-3f40-4037-a8dd-3ea381b9fa21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-684644c847-7pwrk_calico-system(6fe3ea34-3f40-4037-a8dd-3ea381b9fa21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-684644c847-7pwrk" podUID="6fe3ea34-3f40-4037-a8dd-3ea381b9fa21" Apr 25 01:26:29.120884 containerd[1734]: time="2026-04-25T01:26:29.120775184Z" level=error msg="encountered an error cleaning up failed sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.121578 containerd[1734]: time="2026-04-25T01:26:29.121498344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bd8bbdbd-vbd9d,Uid:bfc2469b-0460-4c01-a157-10152c3a426a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.121790 kubelet[3177]: E0425 01:26:29.121749 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.121790 kubelet[3177]: E0425 01:26:29.121788 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" Apr 25 01:26:29.121967 kubelet[3177]: E0425 01:26:29.121803 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" Apr 25 01:26:29.121967 kubelet[3177]: E0425 01:26:29.121852 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85bd8bbdbd-vbd9d_calico-system(bfc2469b-0460-4c01-a157-10152c3a426a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85bd8bbdbd-vbd9d_calico-system(bfc2469b-0460-4c01-a157-10152c3a426a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" podUID="bfc2469b-0460-4c01-a157-10152c3a426a" Apr 25 01:26:29.127820 containerd[1734]: time="2026-04-25T01:26:29.127461503Z" level=error msg="Failed to destroy network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.129238 containerd[1734]: time="2026-04-25T01:26:29.129089223Z" level=error msg="encountered an error cleaning up failed sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.129238 containerd[1734]: time="2026-04-25T01:26:29.129145343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-s6dwd,Uid:cdad5612-4286-46a6-867f-878844a351de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.129761 kubelet[3177]: E0425 01:26:29.129568 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.129761 kubelet[3177]: E0425 
01:26:29.129632 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-s6dwd" Apr 25 01:26:29.129761 kubelet[3177]: E0425 01:26:29.129652 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-s6dwd" Apr 25 01:26:29.129942 kubelet[3177]: E0425 01:26:29.129723 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-s6dwd_calico-system(cdad5612-4286-46a6-867f-878844a351de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-s6dwd_calico-system(cdad5612-4286-46a6-867f-878844a351de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-s6dwd" podUID="cdad5612-4286-46a6-867f-878844a351de" Apr 25 01:26:29.143519 containerd[1734]: time="2026-04-25T01:26:29.143457381Z" level=error msg="Failed to destroy network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.143802 containerd[1734]: time="2026-04-25T01:26:29.143776621Z" level=error msg="encountered an error cleaning up failed sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.143847 containerd[1734]: time="2026-04-25T01:26:29.143829461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-8lnh2,Uid:95e54b6a-50b6-4304-b325-75ea339a6594,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.144090 kubelet[3177]: E0425 01:26:29.144042 3177 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.144150 kubelet[3177]: E0425 01:26:29.144098 3177 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-684644c847-8lnh2" Apr 25 01:26:29.144150 kubelet[3177]: E0425 01:26:29.144126 3177 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-684644c847-8lnh2" Apr 25 01:26:29.144228 kubelet[3177]: E0425 01:26:29.144188 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-684644c847-8lnh2_calico-system(95e54b6a-50b6-4304-b325-75ea339a6594)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-684644c847-8lnh2_calico-system(95e54b6a-50b6-4304-b325-75ea339a6594)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-684644c847-8lnh2" podUID="95e54b6a-50b6-4304-b325-75ea339a6594" Apr 25 01:26:29.281135 kubelet[3177]: I0425 01:26:29.281106 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:26:29.283681 containerd[1734]: time="2026-04-25T01:26:29.282555843Z" level=info msg="StopPodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\"" Apr 25 01:26:29.283681 containerd[1734]: time="2026-04-25T01:26:29.282741883Z" level=info msg="Ensure that sandbox 360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150 in 
task-service has been cleanup successfully" Apr 25 01:26:29.283936 kubelet[3177]: I0425 01:26:29.282671 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:26:29.284706 containerd[1734]: time="2026-04-25T01:26:29.284117963Z" level=info msg="StopPodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\"" Apr 25 01:26:29.284706 containerd[1734]: time="2026-04-25T01:26:29.284263203Z" level=info msg="Ensure that sandbox 4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869 in task-service has been cleanup successfully" Apr 25 01:26:29.287004 kubelet[3177]: I0425 01:26:29.286835 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:26:29.289350 containerd[1734]: time="2026-04-25T01:26:29.289307763Z" level=info msg="StopPodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\"" Apr 25 01:26:29.289570 containerd[1734]: time="2026-04-25T01:26:29.289549322Z" level=info msg="Ensure that sandbox b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17 in task-service has been cleanup successfully" Apr 25 01:26:29.295780 kubelet[3177]: I0425 01:26:29.295753 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:29.297311 containerd[1734]: time="2026-04-25T01:26:29.296965642Z" level=info msg="StopPodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\"" Apr 25 01:26:29.297311 containerd[1734]: time="2026-04-25T01:26:29.297153042Z" level=info msg="Ensure that sandbox 1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d in task-service has been cleanup successfully" Apr 25 01:26:29.316220 kubelet[3177]: I0425 01:26:29.313903 3177 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:26:29.316369 containerd[1734]: time="2026-04-25T01:26:29.315015119Z" level=info msg="StopPodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\"" Apr 25 01:26:29.324071 containerd[1734]: time="2026-04-25T01:26:29.324028198Z" level=info msg="Ensure that sandbox adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238 in task-service has been cleanup successfully" Apr 25 01:26:29.329659 kubelet[3177]: I0425 01:26:29.329053 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:26:29.332784 containerd[1734]: time="2026-04-25T01:26:29.332745917Z" level=info msg="StopPodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\"" Apr 25 01:26:29.333083 containerd[1734]: time="2026-04-25T01:26:29.333064037Z" level=info msg="Ensure that sandbox cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da in task-service has been cleanup successfully" Apr 25 01:26:29.346019 containerd[1734]: time="2026-04-25T01:26:29.345981075Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 25 01:26:29.350371 kubelet[3177]: I0425 01:26:29.349766 3177 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:29.351747 containerd[1734]: time="2026-04-25T01:26:29.351700315Z" level=info msg="StopPodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\"" Apr 25 01:26:29.353257 kubelet[3177]: I0425 01:26:29.353233 3177 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:29.353339 containerd[1734]: time="2026-04-25T01:26:29.353307394Z" level=info msg="Ensure that sandbox 6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1 in task-service has been cleanup successfully" Apr 25 01:26:29.359990 containerd[1734]: time="2026-04-25T01:26:29.359950274Z" level=error msg="StopPodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" failed" error="failed to destroy network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.360359 kubelet[3177]: E0425 01:26:29.360129 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:26:29.360359 kubelet[3177]: E0425 01:26:29.360176 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150"} Apr 25 01:26:29.360359 kubelet[3177]: E0425 01:26:29.360235 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfc2469b-0460-4c01-a157-10152c3a426a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.360359 kubelet[3177]: E0425 01:26:29.360260 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfc2469b-0460-4c01-a157-10152c3a426a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" podUID="bfc2469b-0460-4c01-a157-10152c3a426a" Apr 25 01:26:29.364887 containerd[1734]: time="2026-04-25T01:26:29.364755833Z" level=info msg="StopPodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\"" Apr 25 01:26:29.364954 containerd[1734]: time="2026-04-25T01:26:29.364933473Z" level=info msg="Ensure that sandbox 940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62 in task-service has been cleanup successfully" Apr 25 01:26:29.408838 containerd[1734]: time="2026-04-25T01:26:29.408702747Z" level=error msg="StopPodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" failed" error="failed to destroy network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.409176 kubelet[3177]: E0425 01:26:29.408926 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:26:29.409176 kubelet[3177]: E0425 01:26:29.408971 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17"} Apr 25 01:26:29.409176 kubelet[3177]: E0425 01:26:29.409001 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95e54b6a-50b6-4304-b325-75ea339a6594\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.409176 kubelet[3177]: E0425 01:26:29.409038 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95e54b6a-50b6-4304-b325-75ea339a6594\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-684644c847-8lnh2" podUID="95e54b6a-50b6-4304-b325-75ea339a6594" Apr 25 01:26:29.409836 containerd[1734]: time="2026-04-25T01:26:29.409420667Z" level=error msg="StopPodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" failed" error="failed to destroy network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.411339 kubelet[3177]: E0425 01:26:29.409675 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:26:29.411339 kubelet[3177]: E0425 01:26:29.409712 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869"} Apr 25 01:26:29.411339 kubelet[3177]: E0425 01:26:29.409734 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.411339 kubelet[3177]: E0425 01:26:29.409794 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"824a5d45-7c15-4527-82b4-bbcfeeb63e50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-m9jdr" podUID="824a5d45-7c15-4527-82b4-bbcfeeb63e50" Apr 25 01:26:29.413823 containerd[1734]: time="2026-04-25T01:26:29.412927587Z" level=info msg="CreateContainer within sandbox \"a2e7822c66385fbcda9f7bf2408b1c325cebe7e42e78ac065c8439a8d754d46e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15\"" Apr 25 01:26:29.415721 containerd[1734]: time="2026-04-25T01:26:29.415674507Z" level=info msg="StartContainer for \"ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15\"" Apr 25 01:26:29.426407 containerd[1734]: time="2026-04-25T01:26:29.424544705Z" level=error msg="StopPodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" failed" error="failed to destroy network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.426572 kubelet[3177]: E0425 01:26:29.425234 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:29.426572 kubelet[3177]: E0425 01:26:29.425270 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d"} Apr 25 01:26:29.426572 kubelet[3177]: E0425 01:26:29.425299 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.426572 kubelet[3177]: E0425 01:26:29.425323 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68fdc8476-7bgd5" podUID="d011a28b-5a3d-4c2d-8307-64cd488a9cdd" Apr 25 01:26:29.437737 containerd[1734]: time="2026-04-25T01:26:29.437665464Z" level=error msg="StopPodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" failed" error="failed to destroy network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.438061 kubelet[3177]: E0425 01:26:29.438020 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:29.438250 kubelet[3177]: E0425 01:26:29.438227 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1"} Apr 25 01:26:29.438486 kubelet[3177]: E0425 01:26:29.438365 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdad5612-4286-46a6-867f-878844a351de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.438486 kubelet[3177]: E0425 01:26:29.438400 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdad5612-4286-46a6-867f-878844a351de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-s6dwd" podUID="cdad5612-4286-46a6-867f-878844a351de" Apr 25 01:26:29.449268 containerd[1734]: time="2026-04-25T01:26:29.448143142Z" level=error msg="StopPodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" failed" error="failed to destroy network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 25 01:26:29.449268 containerd[1734]: time="2026-04-25T01:26:29.448794542Z" level=error msg="StopPodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" failed" error="failed to destroy network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.449424 kubelet[3177]: E0425 01:26:29.448579 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:26:29.449424 kubelet[3177]: E0425 01:26:29.448630 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238"} Apr 25 01:26:29.449424 kubelet[3177]: E0425 01:26:29.448659 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"533f9528-3ff0-42b3-85ca-983925f11ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.449424 kubelet[3177]: E0425 01:26:29.448686 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"533f9528-3ff0-42b3-85ca-983925f11ebc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-xz5zq" podUID="533f9528-3ff0-42b3-85ca-983925f11ebc" Apr 25 01:26:29.451016 kubelet[3177]: E0425 01:26:29.450746 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:26:29.451016 kubelet[3177]: E0425 01:26:29.450801 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da"} Apr 25 01:26:29.451016 kubelet[3177]: E0425 01:26:29.450831 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a4c953d-e4d6-4586-907f-7af01091f4b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.451016 kubelet[3177]: E0425 01:26:29.450952 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a4c953d-e4d6-4586-907f-7af01091f4b3\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-hmvfq" podUID="9a4c953d-e4d6-4586-907f-7af01091f4b3" Apr 25 01:26:29.455450 containerd[1734]: time="2026-04-25T01:26:29.455398142Z" level=error msg="StopPodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" failed" error="failed to destroy network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 25 01:26:29.455815 kubelet[3177]: E0425 01:26:29.455609 3177 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:29.455815 kubelet[3177]: E0425 01:26:29.455645 3177 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62"} Apr 25 01:26:29.455815 kubelet[3177]: E0425 01:26:29.455669 3177 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 25 01:26:29.455815 kubelet[3177]: E0425 01:26:29.455691 3177 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-684644c847-7pwrk" podUID="6fe3ea34-3f40-4037-a8dd-3ea381b9fa21" Apr 25 01:26:29.473616 systemd[1]: Started cri-containerd-ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15.scope - libcontainer container ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15. 
Apr 25 01:26:29.503867 containerd[1734]: time="2026-04-25T01:26:29.503825215Z" level=info msg="StartContainer for \"ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15\" returns successfully" Apr 25 01:26:30.345468 kubelet[3177]: I0425 01:26:30.344463 3177 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 01:26:30.360923 containerd[1734]: time="2026-04-25T01:26:30.359524108Z" level=info msg="StopPodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\"" Apr 25 01:26:30.395344 kubelet[3177]: I0425 01:26:30.394173 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-6cdl8" podStartSLOduration=2.51279596 podStartE2EDuration="27.394158663s" podCreationTimestamp="2026-04-25 01:26:03 +0000 UTC" firstStartedPulling="2026-04-25 01:26:04.424990057 +0000 UTC m=+22.403611784" lastFinishedPulling="2026-04-25 01:26:29.3063528 +0000 UTC m=+47.284974487" observedRunningTime="2026-04-25 01:26:30.392935183 +0000 UTC m=+48.371556910" watchObservedRunningTime="2026-04-25 01:26:30.394158663 +0000 UTC m=+48.372780390" Apr 25 01:26:30.408591 systemd[1]: run-containerd-runc-k8s.io-ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15-runc.SnLpAZ.mount: Deactivated successfully. Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.468 [INFO][4471] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.468 [INFO][4471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" iface="eth0" netns="/var/run/netns/cni-993d5862-b24d-9bad-c8cd-88b76b114e40" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.468 [INFO][4471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" iface="eth0" netns="/var/run/netns/cni-993d5862-b24d-9bad-c8cd-88b76b114e40" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.469 [INFO][4471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" iface="eth0" netns="/var/run/netns/cni-993d5862-b24d-9bad-c8cd-88b76b114e40" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.469 [INFO][4471] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.469 [INFO][4471] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.500 [INFO][4501] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.500 [INFO][4501] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.500 [INFO][4501] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.508 [WARNING][4501] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.508 [INFO][4501] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.510 [INFO][4501] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:30.513982 containerd[1734]: 2026-04-25 01:26:30.512 [INFO][4471] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:30.516586 containerd[1734]: time="2026-04-25T01:26:30.514107528Z" level=info msg="TearDown network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" successfully" Apr 25 01:26:30.516586 containerd[1734]: time="2026-04-25T01:26:30.514133408Z" level=info msg="StopPodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" returns successfully" Apr 25 01:26:30.516949 systemd[1]: run-netns-cni\x2d993d5862\x2db24d\x2d9bad\x2dc8cd\x2d88b76b114e40.mount: Deactivated successfully. 
Apr 25 01:26:30.637694 kubelet[3177]: I0425 01:26:30.636976 3177 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-backend-key-pair\") pod \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " Apr 25 01:26:30.637694 kubelet[3177]: I0425 01:26:30.637039 3177 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-nginx-config\" (UniqueName: \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-nginx-config\") pod \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " Apr 25 01:26:30.637694 kubelet[3177]: I0425 01:26:30.637059 3177 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-kube-api-access-bhzc6\" (UniqueName: \"kubernetes.io/projected/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-kube-api-access-bhzc6\") pod \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " Apr 25 01:26:30.637694 kubelet[3177]: I0425 01:26:30.637077 3177 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-ca-bundle\") pod \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\" (UID: \"d011a28b-5a3d-4c2d-8307-64cd488a9cdd\") " Apr 25 01:26:30.637694 kubelet[3177]: I0425 01:26:30.637372 3177 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-nginx-config" pod "d011a28b-5a3d-4c2d-8307-64cd488a9cdd" (UID: "d011a28b-5a3d-4c2d-8307-64cd488a9cdd"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 01:26:30.637915 kubelet[3177]: I0425 01:26:30.637392 3177 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-ca-bundle" pod "d011a28b-5a3d-4c2d-8307-64cd488a9cdd" (UID: "d011a28b-5a3d-4c2d-8307-64cd488a9cdd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 25 01:26:30.643305 systemd[1]: var-lib-kubelet-pods-d011a28b\x2d5a3d\x2d4c2d\x2d8307\x2d64cd488a9cdd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbhzc6.mount: Deactivated successfully. Apr 25 01:26:30.644100 kubelet[3177]: I0425 01:26:30.644064 3177 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-kube-api-access-bhzc6" pod "d011a28b-5a3d-4c2d-8307-64cd488a9cdd" (UID: "d011a28b-5a3d-4c2d-8307-64cd488a9cdd"). InnerVolumeSpecName "kube-api-access-bhzc6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 25 01:26:30.644205 kubelet[3177]: I0425 01:26:30.644056 3177 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-backend-key-pair" pod "d011a28b-5a3d-4c2d-8307-64cd488a9cdd" (UID: "d011a28b-5a3d-4c2d-8307-64cd488a9cdd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 25 01:26:30.644371 systemd[1]: var-lib-kubelet-pods-d011a28b\x2d5a3d\x2d4c2d\x2d8307\x2d64cd488a9cdd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 25 01:26:30.737445 kubelet[3177]: I0425 01:26:30.737395 3177 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-nginx-config\") on node \"ci-4081.3.6-n-cf3dcbc0ec\" DevicePath \"\"" Apr 25 01:26:30.737445 kubelet[3177]: I0425 01:26:30.737425 3177 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bhzc6\" (UniqueName: \"kubernetes.io/projected/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-kube-api-access-bhzc6\") on node \"ci-4081.3.6-n-cf3dcbc0ec\" DevicePath \"\"" Apr 25 01:26:30.737445 kubelet[3177]: I0425 01:26:30.737450 3177 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-ca-bundle\") on node \"ci-4081.3.6-n-cf3dcbc0ec\" DevicePath \"\"" Apr 25 01:26:30.737445 kubelet[3177]: I0425 01:26:30.737459 3177 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d011a28b-5a3d-4c2d-8307-64cd488a9cdd-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-cf3dcbc0ec\" DevicePath \"\"" Apr 25 01:26:31.163460 kernel: calico-node[4581]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 25 01:26:31.375370 systemd[1]: Removed slice kubepods-besteffort-podd011a28b_5a3d_4c2d_8307_64cd488a9cdd.slice - libcontainer container kubepods-besteffort-podd011a28b_5a3d_4c2d_8307_64cd488a9cdd.slice. Apr 25 01:26:31.410269 systemd[1]: run-containerd-runc-k8s.io-ab265a602c2cab2ecf59d06b88b0cfd07dfeb2356bfa09286ff724bd4d1e0e15-runc.KRjIUu.mount: Deactivated successfully. Apr 25 01:26:31.475321 systemd[1]: Created slice kubepods-besteffort-pod76fc8b37_887a_4805_97a7_9e56808c2da1.slice - libcontainer container kubepods-besteffort-pod76fc8b37_887a_4805_97a7_9e56808c2da1.slice. 
Apr 25 01:26:31.542893 kubelet[3177]: I0425 01:26:31.542851 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7dt2\" (UniqueName: \"kubernetes.io/projected/76fc8b37-887a-4805-97a7-9e56808c2da1-kube-api-access-f7dt2\") pod \"whisker-7598fbcbd9-mnndd\" (UID: \"76fc8b37-887a-4805-97a7-9e56808c2da1\") " pod="calico-system/whisker-7598fbcbd9-mnndd" Apr 25 01:26:31.542893 kubelet[3177]: I0425 01:26:31.542897 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76fc8b37-887a-4805-97a7-9e56808c2da1-whisker-backend-key-pair\") pod \"whisker-7598fbcbd9-mnndd\" (UID: \"76fc8b37-887a-4805-97a7-9e56808c2da1\") " pod="calico-system/whisker-7598fbcbd9-mnndd" Apr 25 01:26:31.543307 kubelet[3177]: I0425 01:26:31.542918 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/76fc8b37-887a-4805-97a7-9e56808c2da1-nginx-config\") pod \"whisker-7598fbcbd9-mnndd\" (UID: \"76fc8b37-887a-4805-97a7-9e56808c2da1\") " pod="calico-system/whisker-7598fbcbd9-mnndd" Apr 25 01:26:31.543307 kubelet[3177]: I0425 01:26:31.542937 3177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76fc8b37-887a-4805-97a7-9e56808c2da1-whisker-ca-bundle\") pod \"whisker-7598fbcbd9-mnndd\" (UID: \"76fc8b37-887a-4805-97a7-9e56808c2da1\") " pod="calico-system/whisker-7598fbcbd9-mnndd" Apr 25 01:26:31.786465 containerd[1734]: time="2026-04-25T01:26:31.786416568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7598fbcbd9-mnndd,Uid:76fc8b37-887a-4805-97a7-9e56808c2da1,Namespace:calico-system,Attempt:0,}" Apr 25 01:26:31.827027 systemd-networkd[1367]: vxlan.calico: Link UP Apr 25 01:26:31.827034 systemd-networkd[1367]: 
vxlan.calico: Gained carrier Apr 25 01:26:31.956566 systemd-networkd[1367]: calic1841ffce1a: Link UP Apr 25 01:26:31.957425 systemd-networkd[1367]: calic1841ffce1a: Gained carrier Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.877 [INFO][4702] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0 whisker-7598fbcbd9- calico-system 76fc8b37-887a-4805-97a7-9e56808c2da1 964 0 2026-04-25 01:26:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7598fbcbd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec whisker-7598fbcbd9-mnndd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic1841ffce1a [] [] }} ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.878 [INFO][4702] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.906 [INFO][4716] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" HandleID="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.917 [INFO][4716] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" HandleID="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"whisker-7598fbcbd9-mnndd", "timestamp":"2026-04-25 01:26:31.906814193 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003071e0)} Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.917 [INFO][4716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.917 [INFO][4716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.917 [INFO][4716] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.919 [INFO][4716] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.923 [INFO][4716] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.927 [INFO][4716] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.929 [INFO][4716] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.931 [INFO][4716] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.931 [INFO][4716] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.932 [INFO][4716] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.937 [INFO][4716] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.945 [INFO][4716] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.111.65/26] block=192.168.111.64/26 handle="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.946 [INFO][4716] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.65/26] handle="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.946 [INFO][4716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:31.982096 containerd[1734]: 2026-04-25 01:26:31.946 [INFO][4716] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.65/26] IPv6=[] ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" HandleID="k8s-pod-network.b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:31.982968 containerd[1734]: 2026-04-25 01:26:31.948 [INFO][4702] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0", GenerateName:"whisker-7598fbcbd9-", Namespace:"calico-system", SelfLink:"", UID:"76fc8b37-887a-4805-97a7-9e56808c2da1", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7598fbcbd9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"whisker-7598fbcbd9-mnndd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1841ffce1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:31.982968 containerd[1734]: 2026-04-25 01:26:31.949 [INFO][4702] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.65/32] ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:31.982968 containerd[1734]: 2026-04-25 01:26:31.949 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1841ffce1a ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:31.982968 containerd[1734]: 2026-04-25 01:26:31.959 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:31.982968 containerd[1734]: 2026-04-25 01:26:31.960 [INFO][4702] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0", GenerateName:"whisker-7598fbcbd9-", Namespace:"calico-system", SelfLink:"", UID:"76fc8b37-887a-4805-97a7-9e56808c2da1", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7598fbcbd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec", Pod:"whisker-7598fbcbd9-mnndd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic1841ffce1a", MAC:"d6:41:2b:eb:94:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:31.982968 containerd[1734]: 2026-04-25 01:26:31.978 [INFO][4702] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec" Namespace="calico-system" Pod="whisker-7598fbcbd9-mnndd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--7598fbcbd9--mnndd-eth0" Apr 25 01:26:32.012691 containerd[1734]: time="2026-04-25T01:26:32.012415579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:32.012691 containerd[1734]: time="2026-04-25T01:26:32.012488339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:32.012691 containerd[1734]: time="2026-04-25T01:26:32.012504259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:32.012691 containerd[1734]: time="2026-04-25T01:26:32.012584139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:32.029636 systemd[1]: Started cri-containerd-b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec.scope - libcontainer container b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec. 
Apr 25 01:26:32.059521 containerd[1734]: time="2026-04-25T01:26:32.059471253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7598fbcbd9-mnndd,Uid:76fc8b37-887a-4805-97a7-9e56808c2da1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec\"" Apr 25 01:26:32.062603 containerd[1734]: time="2026-04-25T01:26:32.062073653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 25 01:26:32.139107 kubelet[3177]: I0425 01:26:32.139071 3177 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d011a28b-5a3d-4c2d-8307-64cd488a9cdd" path="/var/lib/kubelet/pods/d011a28b-5a3d-4c2d-8307-64cd488a9cdd/volumes" Apr 25 01:26:33.571058 containerd[1734]: time="2026-04-25T01:26:33.571004143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:33.574057 containerd[1734]: time="2026-04-25T01:26:33.573855582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 25 01:26:33.578138 containerd[1734]: time="2026-04-25T01:26:33.577795422Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:33.583296 containerd[1734]: time="2026-04-25T01:26:33.583251181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:33.584125 containerd[1734]: time="2026-04-25T01:26:33.584092341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.521981288s" Apr 25 01:26:33.584224 containerd[1734]: time="2026-04-25T01:26:33.584208621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 25 01:26:33.592287 containerd[1734]: time="2026-04-25T01:26:33.592247460Z" level=info msg="CreateContainer within sandbox \"b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 25 01:26:33.629340 containerd[1734]: time="2026-04-25T01:26:33.629222614Z" level=info msg="CreateContainer within sandbox \"b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bd29d90c066bcc471b6b0ecfa87af4d896924adbd8ea06106de1b1138efec29b\"" Apr 25 01:26:33.630519 containerd[1734]: time="2026-04-25T01:26:33.629983174Z" level=info msg="StartContainer for \"bd29d90c066bcc471b6b0ecfa87af4d896924adbd8ea06106de1b1138efec29b\"" Apr 25 01:26:33.661622 systemd[1]: Started cri-containerd-bd29d90c066bcc471b6b0ecfa87af4d896924adbd8ea06106de1b1138efec29b.scope - libcontainer container bd29d90c066bcc471b6b0ecfa87af4d896924adbd8ea06106de1b1138efec29b. 
Apr 25 01:26:33.695768 containerd[1734]: time="2026-04-25T01:26:33.695702636Z" level=info msg="StartContainer for \"bd29d90c066bcc471b6b0ecfa87af4d896924adbd8ea06106de1b1138efec29b\" returns successfully" Apr 25 01:26:33.697575 containerd[1734]: time="2026-04-25T01:26:33.697405156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 25 01:26:33.698720 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Apr 25 01:26:33.699250 systemd-networkd[1367]: calic1841ffce1a: Gained IPv6LL Apr 25 01:26:35.577355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468590495.mount: Deactivated successfully. Apr 25 01:26:35.640371 containerd[1734]: time="2026-04-25T01:26:35.639582204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:35.642671 containerd[1734]: time="2026-04-25T01:26:35.642641283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 25 01:26:35.646399 containerd[1734]: time="2026-04-25T01:26:35.646370002Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:35.651397 containerd[1734]: time="2026-04-25T01:26:35.651360640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:35.652574 containerd[1734]: time="2026-04-25T01:26:35.652545760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.955102404s" Apr 25 01:26:35.652619 containerd[1734]: time="2026-04-25T01:26:35.652578960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 25 01:26:35.660619 containerd[1734]: time="2026-04-25T01:26:35.660585998Z" level=info msg="CreateContainer within sandbox \"b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 25 01:26:35.698918 containerd[1734]: time="2026-04-25T01:26:35.698875908Z" level=info msg="CreateContainer within sandbox \"b2d9b261f7b8f8d51ea42084605a43d97fc0440ead8b1dfc2b6d74e5237eb3ec\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d0faa43ff5c153e8042b0980f35ffd68cc085ec4e44e34293755a8b3ce2654fc\"" Apr 25 01:26:35.700895 containerd[1734]: time="2026-04-25T01:26:35.700861708Z" level=info msg="StartContainer for \"d0faa43ff5c153e8042b0980f35ffd68cc085ec4e44e34293755a8b3ce2654fc\"" Apr 25 01:26:35.729609 systemd[1]: Started cri-containerd-d0faa43ff5c153e8042b0980f35ffd68cc085ec4e44e34293755a8b3ce2654fc.scope - libcontainer container d0faa43ff5c153e8042b0980f35ffd68cc085ec4e44e34293755a8b3ce2654fc. 
Apr 25 01:26:35.770895 containerd[1734]: time="2026-04-25T01:26:35.770572129Z" level=info msg="StartContainer for \"d0faa43ff5c153e8042b0980f35ffd68cc085ec4e44e34293755a8b3ce2654fc\" returns successfully" Apr 25 01:26:40.138612 containerd[1734]: time="2026-04-25T01:26:40.138557504Z" level=info msg="StopPodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\"" Apr 25 01:26:40.192579 kubelet[3177]: I0425 01:26:40.191474 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7598fbcbd9-mnndd" podStartSLOduration=5.599914223 podStartE2EDuration="9.19145877s" podCreationTimestamp="2026-04-25 01:26:31 +0000 UTC" firstStartedPulling="2026-04-25 01:26:32.061680973 +0000 UTC m=+50.040302660" lastFinishedPulling="2026-04-25 01:26:35.65322548 +0000 UTC m=+53.631847207" observedRunningTime="2026-04-25 01:26:36.400871244 +0000 UTC m=+54.379492971" watchObservedRunningTime="2026-04-25 01:26:40.19145877 +0000 UTC m=+58.170080497" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.191 [INFO][4946] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.192 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" iface="eth0" netns="/var/run/netns/cni-e65b365b-1193-22a8-f2ae-ded3297d5930" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.192 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" iface="eth0" netns="/var/run/netns/cni-e65b365b-1193-22a8-f2ae-ded3297d5930" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.193 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" iface="eth0" netns="/var/run/netns/cni-e65b365b-1193-22a8-f2ae-ded3297d5930" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.193 [INFO][4946] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.193 [INFO][4946] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.211 [INFO][4953] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.211 [INFO][4953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.211 [INFO][4953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.220 [WARNING][4953] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.220 [INFO][4953] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.221 [INFO][4953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:40.225343 containerd[1734]: 2026-04-25 01:26:40.223 [INFO][4946] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:40.228895 containerd[1734]: time="2026-04-25T01:26:40.228852121Z" level=info msg="TearDown network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" successfully" Apr 25 01:26:40.228895 containerd[1734]: time="2026-04-25T01:26:40.228887601Z" level=info msg="StopPodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" returns successfully" Apr 25 01:26:40.231050 systemd[1]: run-netns-cni\x2de65b365b\x2d1193\x2d22a8\x2df2ae\x2dded3297d5930.mount: Deactivated successfully. 
Apr 25 01:26:40.235406 containerd[1734]: time="2026-04-25T01:26:40.235357079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-s6dwd,Uid:cdad5612-4286-46a6-867f-878844a351de,Namespace:calico-system,Attempt:1,}" Apr 25 01:26:40.389947 systemd-networkd[1367]: cali4c14fa76731: Link UP Apr 25 01:26:40.390662 systemd-networkd[1367]: cali4c14fa76731: Gained carrier Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.316 [INFO][4963] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0 goldmane-9f7667bb8- calico-system cdad5612-4286-46a6-867f-878844a351de 1001 0 2026-04-25 01:26:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec goldmane-9f7667bb8-s6dwd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4c14fa76731 [] [] }} ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.316 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.341 [INFO][4971] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" HandleID="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" 
Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.352 [INFO][4971] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" HandleID="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000377d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"goldmane-9f7667bb8-s6dwd", "timestamp":"2026-04-25 01:26:40.341373091 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400026a000)} Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.352 [INFO][4971] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.352 [INFO][4971] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.352 [INFO][4971] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.354 [INFO][4971] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.358 [INFO][4971] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.362 [INFO][4971] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.363 [INFO][4971] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.365 [INFO][4971] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.365 [INFO][4971] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.366 [INFO][4971] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.371 [INFO][4971] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.382 [INFO][4971] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.111.66/26] block=192.168.111.64/26 handle="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.382 [INFO][4971] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.66/26] handle="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.382 [INFO][4971] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:40.416878 containerd[1734]: 2026-04-25 01:26:40.382 [INFO][4971] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.66/26] IPv6=[] ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" HandleID="k8s-pod-network.282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.418006 containerd[1734]: 2026-04-25 01:26:40.384 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"cdad5612-4286-46a6-867f-878844a351de", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"goldmane-9f7667bb8-s6dwd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c14fa76731", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:40.418006 containerd[1734]: 2026-04-25 01:26:40.384 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.66/32] ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.418006 containerd[1734]: 2026-04-25 01:26:40.384 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c14fa76731 ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.418006 containerd[1734]: 2026-04-25 01:26:40.390 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.418006 containerd[1734]: 2026-04-25 01:26:40.391 [INFO][4963] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"cdad5612-4286-46a6-867f-878844a351de", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d", Pod:"goldmane-9f7667bb8-s6dwd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c14fa76731", MAC:"9e:51:26:f9:50:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:40.418006 containerd[1734]: 2026-04-25 01:26:40.413 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d" Namespace="calico-system" Pod="goldmane-9f7667bb8-s6dwd" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:40.440044 containerd[1734]: time="2026-04-25T01:26:40.439657465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:40.440044 containerd[1734]: time="2026-04-25T01:26:40.439723865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:40.440044 containerd[1734]: time="2026-04-25T01:26:40.439739305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:40.440044 containerd[1734]: time="2026-04-25T01:26:40.439829585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:40.467587 systemd[1]: Started cri-containerd-282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d.scope - libcontainer container 282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d. 
Apr 25 01:26:40.500711 containerd[1734]: time="2026-04-25T01:26:40.500631969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-s6dwd,Uid:cdad5612-4286-46a6-867f-878844a351de,Namespace:calico-system,Attempt:1,} returns sandbox id \"282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d\"" Apr 25 01:26:40.503507 containerd[1734]: time="2026-04-25T01:26:40.503387849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 25 01:26:41.137456 containerd[1734]: time="2026-04-25T01:26:41.137046803Z" level=info msg="StopPodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\"" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.200 [INFO][5052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.200 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" iface="eth0" netns="/var/run/netns/cni-40c22706-dbfc-c1a8-bfc7-23b3a2793eb3" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.200 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" iface="eth0" netns="/var/run/netns/cni-40c22706-dbfc-c1a8-bfc7-23b3a2793eb3" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.201 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" iface="eth0" netns="/var/run/netns/cni-40c22706-dbfc-c1a8-bfc7-23b3a2793eb3" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.201 [INFO][5052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.201 [INFO][5052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.220 [INFO][5059] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.220 [INFO][5059] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.220 [INFO][5059] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.228 [WARNING][5059] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.228 [INFO][5059] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.230 [INFO][5059] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:41.233678 containerd[1734]: 2026-04-25 01:26:41.231 [INFO][5052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:41.236364 containerd[1734]: time="2026-04-25T01:26:41.233809017Z" level=info msg="TearDown network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" successfully" Apr 25 01:26:41.236364 containerd[1734]: time="2026-04-25T01:26:41.233836737Z" level=info msg="StopPodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" returns successfully" Apr 25 01:26:41.236590 systemd[1]: run-netns-cni\x2d40c22706\x2ddbfc\x2dc1a8\x2dbfc7\x2d23b3a2793eb3.mount: Deactivated successfully. 
Apr 25 01:26:41.240726 containerd[1734]: time="2026-04-25T01:26:41.240333055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-7pwrk,Uid:6fe3ea34-3f40-4037-a8dd-3ea381b9fa21,Namespace:calico-system,Attempt:1,}" Apr 25 01:26:41.412151 systemd-networkd[1367]: cali32027003cc3: Link UP Apr 25 01:26:41.412365 systemd-networkd[1367]: cali32027003cc3: Gained carrier Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.343 [INFO][5066] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0 calico-apiserver-684644c847- calico-system 6fe3ea34-3f40-4037-a8dd-3ea381b9fa21 1013 0 2026-04-25 01:26:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684644c847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec calico-apiserver-684644c847-7pwrk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali32027003cc3 [] [] }} ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.343 [INFO][5066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.364 [INFO][5077] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" HandleID="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.374 [INFO][5077] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" HandleID="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002737d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"calico-apiserver-684644c847-7pwrk", "timestamp":"2026-04-25 01:26:41.364991743 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003131e0)} Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.374 [INFO][5077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.374 [INFO][5077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.374 [INFO][5077] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.376 [INFO][5077] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.380 [INFO][5077] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.384 [INFO][5077] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.386 [INFO][5077] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.388 [INFO][5077] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.388 [INFO][5077] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.389 [INFO][5077] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.394 [INFO][5077] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.404 [INFO][5077] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.111.67/26] block=192.168.111.64/26 handle="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.404 [INFO][5077] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.67/26] handle="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.404 [INFO][5077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:41.435238 containerd[1734]: 2026-04-25 01:26:41.404 [INFO][5077] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.67/26] IPv6=[] ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" HandleID="k8s-pod-network.534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.435911 containerd[1734]: 2026-04-25 01:26:41.407 [INFO][5066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"calico-apiserver-684644c847-7pwrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32027003cc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:41.435911 containerd[1734]: 2026-04-25 01:26:41.407 [INFO][5066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.67/32] ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.435911 containerd[1734]: 2026-04-25 01:26:41.407 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32027003cc3 ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.435911 containerd[1734]: 2026-04-25 01:26:41.412 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" 
WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.435911 containerd[1734]: 2026-04-25 01:26:41.414 [INFO][5066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a", Pod:"calico-apiserver-684644c847-7pwrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32027003cc3", MAC:"ca:e2:fa:16:21:26", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:41.435911 containerd[1734]: 2026-04-25 01:26:41.432 [INFO][5066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a" Namespace="calico-system" Pod="calico-apiserver-684644c847-7pwrk" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:41.462980 containerd[1734]: time="2026-04-25T01:26:41.462402037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:41.462980 containerd[1734]: time="2026-04-25T01:26:41.462488077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:41.462980 containerd[1734]: time="2026-04-25T01:26:41.462504157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:41.462980 containerd[1734]: time="2026-04-25T01:26:41.462585397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:41.491605 systemd[1]: Started cri-containerd-534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a.scope - libcontainer container 534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a. 
Apr 25 01:26:41.528850 containerd[1734]: time="2026-04-25T01:26:41.528802100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-7pwrk,Uid:6fe3ea34-3f40-4037-a8dd-3ea381b9fa21,Namespace:calico-system,Attempt:1,} returns sandbox id \"534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a\"" Apr 25 01:26:41.634566 systemd-networkd[1367]: cali4c14fa76731: Gained IPv6LL Apr 25 01:26:42.113891 containerd[1734]: time="2026-04-25T01:26:42.113853626Z" level=info msg="StopPodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\"" Apr 25 01:26:42.139376 containerd[1734]: time="2026-04-25T01:26:42.139101460Z" level=info msg="StopPodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\"" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.160 [WARNING][5160] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.160 [INFO][5160] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.160 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" iface="eth0" netns="" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.160 [INFO][5160] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.160 [INFO][5160] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.244 [INFO][5184] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.246 [INFO][5184] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.246 [INFO][5184] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.257 [WARNING][5184] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.261 [INFO][5184] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.264 [INFO][5184] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:42.269882 containerd[1734]: 2026-04-25 01:26:42.266 [INFO][5160] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.271710 containerd[1734]: time="2026-04-25T01:26:42.269911826Z" level=info msg="TearDown network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" successfully" Apr 25 01:26:42.271710 containerd[1734]: time="2026-04-25T01:26:42.269935346Z" level=info msg="StopPodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" returns successfully" Apr 25 01:26:42.271710 containerd[1734]: time="2026-04-25T01:26:42.271668265Z" level=info msg="RemovePodSandbox for \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\"" Apr 25 01:26:42.271710 containerd[1734]: time="2026-04-25T01:26:42.271709625Z" level=info msg="Forcibly stopping sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\"" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.220 [INFO][5177] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:26:42.309779 
containerd[1734]: 2026-04-25 01:26:42.221 [INFO][5177] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" iface="eth0" netns="/var/run/netns/cni-4747fbee-21ba-a2b8-7bdd-77aa0fe5f273" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.222 [INFO][5177] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" iface="eth0" netns="/var/run/netns/cni-4747fbee-21ba-a2b8-7bdd-77aa0fe5f273" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.222 [INFO][5177] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" iface="eth0" netns="/var/run/netns/cni-4747fbee-21ba-a2b8-7bdd-77aa0fe5f273" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.222 [INFO][5177] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.222 [INFO][5177] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.273 [INFO][5190] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.274 [INFO][5190] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.274 [INFO][5190] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.296 [WARNING][5190] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.296 [INFO][5190] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.299 [INFO][5190] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:42.309779 containerd[1734]: 2026-04-25 01:26:42.305 [INFO][5177] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:26:42.312154 containerd[1734]: time="2026-04-25T01:26:42.310117175Z" level=info msg="TearDown network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" successfully" Apr 25 01:26:42.312154 containerd[1734]: time="2026-04-25T01:26:42.310142815Z" level=info msg="StopPodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" returns successfully" Apr 25 01:26:42.313327 systemd[1]: run-netns-cni\x2d4747fbee\x2d21ba\x2da2b8\x2d7bdd\x2d77aa0fe5f273.mount: Deactivated successfully. 
Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.340 [WARNING][5213] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.340 [INFO][5213] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.340 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" iface="eth0" netns="" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.340 [INFO][5213] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.340 [INFO][5213] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.357 [INFO][5223] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.357 [INFO][5223] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.357 [INFO][5223] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.366 [WARNING][5223] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.366 [INFO][5223] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" HandleID="k8s-pod-network.1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-whisker--68fdc8476--7bgd5-eth0" Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.367 [INFO][5223] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:42.371199 containerd[1734]: 2026-04-25 01:26:42.369 [INFO][5213] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d" Apr 25 01:26:42.371199 containerd[1734]: time="2026-04-25T01:26:42.371175799Z" level=info msg="TearDown network for sandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" successfully" Apr 25 01:26:42.394225 containerd[1734]: time="2026-04-25T01:26:42.394180073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmvfq,Uid:9a4c953d-e4d6-4586-907f-7af01091f4b3,Namespace:kube-system,Attempt:1,}" Apr 25 01:26:42.454618 containerd[1734]: time="2026-04-25T01:26:42.454573937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:26:42.454747 containerd[1734]: time="2026-04-25T01:26:42.454652057Z" level=info msg="RemovePodSandbox \"1d0e11e92d62c3c34ada2f65fcd213cfefa492d6df06ff704e590fbd239b790d\" returns successfully" Apr 25 01:26:42.456282 containerd[1734]: time="2026-04-25T01:26:42.456239417Z" level=info msg="StopPodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\"" Apr 25 01:26:42.530673 systemd-networkd[1367]: cali32027003cc3: Gained IPv6LL Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.525 [WARNING][5241] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"cdad5612-4286-46a6-867f-878844a351de", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d", Pod:"goldmane-9f7667bb8-s6dwd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c14fa76731", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.529 [INFO][5241] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.529 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" iface="eth0" netns="" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.529 [INFO][5241] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.529 [INFO][5241] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.587 [INFO][5258] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.588 [INFO][5258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.588 [INFO][5258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.602 [WARNING][5258] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.602 [INFO][5258] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.603 [INFO][5258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:42.607222 containerd[1734]: 2026-04-25 01:26:42.604 [INFO][5241] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.607222 containerd[1734]: time="2026-04-25T01:26:42.607002057Z" level=info msg="TearDown network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" successfully" Apr 25 01:26:42.607222 containerd[1734]: time="2026-04-25T01:26:42.607029057Z" level=info msg="StopPodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" returns successfully" Apr 25 01:26:42.608731 containerd[1734]: time="2026-04-25T01:26:42.608244097Z" level=info msg="RemovePodSandbox for \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\"" Apr 25 01:26:42.608731 containerd[1734]: time="2026-04-25T01:26:42.608282417Z" level=info msg="Forcibly stopping sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\"" Apr 25 01:26:42.694498 systemd-networkd[1367]: cali3f92490ec7e: Link UP Apr 25 01:26:42.694740 systemd-networkd[1367]: cali3f92490ec7e: Gained carrier Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.572 [INFO][5245] 
cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0 coredns-7d764666f9- kube-system 9a4c953d-e4d6-4586-907f-7af01091f4b3 1020 0 2026-04-25 01:25:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec coredns-7d764666f9-hmvfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f92490ec7e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.574 [INFO][5245] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.620 [INFO][5269] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" HandleID="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.634 [INFO][5269] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" HandleID="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" 
Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"coredns-7d764666f9-hmvfq", "timestamp":"2026-04-25 01:26:42.620952574 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400025edc0)} Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.634 [INFO][5269] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.634 [INFO][5269] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.634 [INFO][5269] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.637 [INFO][5269] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.644 [INFO][5269] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.653 [INFO][5269] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.658 [INFO][5269] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.661 [INFO][5269] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 
host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.661 [INFO][5269] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.664 [INFO][5269] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896 Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.671 [INFO][5269] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.684 [INFO][5269] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.111.68/26] block=192.168.111.64/26 handle="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.684 [INFO][5269] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.68/26] handle="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.684 [INFO][5269] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 25 01:26:42.721699 containerd[1734]: 2026-04-25 01:26:42.684 [INFO][5269] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.68/26] IPv6=[] ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" HandleID="k8s-pod-network.6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.723005 containerd[1734]: 2026-04-25 01:26:42.690 [INFO][5245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9a4c953d-e4d6-4586-907f-7af01091f4b3", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"coredns-7d764666f9-hmvfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali3f92490ec7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:42.723005 containerd[1734]: 2026-04-25 01:26:42.690 [INFO][5245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.68/32] ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.723005 containerd[1734]: 2026-04-25 01:26:42.690 [INFO][5245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f92490ec7e ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.723005 containerd[1734]: 2026-04-25 01:26:42.693 [INFO][5245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.723005 
containerd[1734]: 2026-04-25 01:26:42.693 [INFO][5245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9a4c953d-e4d6-4586-907f-7af01091f4b3", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896", Pod:"coredns-7d764666f9-hmvfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f92490ec7e", MAC:"5a:ee:1e:19:88:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:42.723263 containerd[1734]: 2026-04-25 01:26:42.716 [INFO][5245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896" Namespace="kube-system" Pod="coredns-7d764666f9-hmvfq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:26:42.750875 containerd[1734]: time="2026-04-25T01:26:42.750641019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:42.750875 containerd[1734]: time="2026-04-25T01:26:42.750696779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:42.750875 containerd[1734]: time="2026-04-25T01:26:42.750712099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:42.750875 containerd[1734]: time="2026-04-25T01:26:42.750802579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.672 [WARNING][5286] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"cdad5612-4286-46a6-867f-878844a351de", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d", Pod:"goldmane-9f7667bb8-s6dwd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c14fa76731", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.673 [INFO][5286] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.673 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" iface="eth0" netns="" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.673 [INFO][5286] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.673 [INFO][5286] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.724 [INFO][5294] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.725 [INFO][5294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.725 [INFO][5294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.744 [WARNING][5294] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.744 [INFO][5294] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" HandleID="k8s-pod-network.6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-goldmane--9f7667bb8--s6dwd-eth0" Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.749 [INFO][5294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:42.763366 containerd[1734]: 2026-04-25 01:26:42.757 [INFO][5286] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1" Apr 25 01:26:42.763366 containerd[1734]: time="2026-04-25T01:26:42.761806017Z" level=info msg="TearDown network for sandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" successfully" Apr 25 01:26:42.770775 containerd[1734]: time="2026-04-25T01:26:42.770721654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:26:42.770998 containerd[1734]: time="2026-04-25T01:26:42.770979494Z" level=info msg="RemovePodSandbox \"6c531ffb1759535f7b86ee582f48f87585d2eeca8ac8d05b08f64656b8e181c1\" returns successfully" Apr 25 01:26:42.772078 containerd[1734]: time="2026-04-25T01:26:42.772051134Z" level=info msg="StopPodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\"" Apr 25 01:26:42.791725 systemd[1]: Started cri-containerd-6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896.scope - libcontainer container 6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896. Apr 25 01:26:42.858886 containerd[1734]: time="2026-04-25T01:26:42.858849031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-hmvfq,Uid:9a4c953d-e4d6-4586-907f-7af01091f4b3,Namespace:kube-system,Attempt:1,} returns sandbox id \"6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896\"" Apr 25 01:26:42.869920 containerd[1734]: time="2026-04-25T01:26:42.869885828Z" level=info msg="CreateContainer within sandbox \"6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.845 [WARNING][5346] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a", Pod:"calico-apiserver-684644c847-7pwrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32027003cc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.846 [INFO][5346] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.846 [INFO][5346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" iface="eth0" netns="" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.846 [INFO][5346] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.846 [INFO][5346] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.889 [INFO][5372] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.889 [INFO][5372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.889 [INFO][5372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.904 [WARNING][5372] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.904 [INFO][5372] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.905 [INFO][5372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:42.912398 containerd[1734]: 2026-04-25 01:26:42.908 [INFO][5346] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:42.913641 containerd[1734]: time="2026-04-25T01:26:42.913374377Z" level=info msg="TearDown network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" successfully" Apr 25 01:26:42.913641 containerd[1734]: time="2026-04-25T01:26:42.913465377Z" level=info msg="StopPodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" returns successfully" Apr 25 01:26:42.915664 containerd[1734]: time="2026-04-25T01:26:42.914323417Z" level=info msg="CreateContainer within sandbox \"6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99eb0a901f2c2deb6c9eb78adc322b1460f6ce1a387c6b1e01a27974a1e0b788\"" Apr 25 01:26:42.915664 containerd[1734]: time="2026-04-25T01:26:42.914517616Z" level=info msg="RemovePodSandbox for \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\"" Apr 25 01:26:42.915664 containerd[1734]: 
time="2026-04-25T01:26:42.914536896Z" level=info msg="Forcibly stopping sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\"" Apr 25 01:26:42.917599 containerd[1734]: time="2026-04-25T01:26:42.917568296Z" level=info msg="StartContainer for \"99eb0a901f2c2deb6c9eb78adc322b1460f6ce1a387c6b1e01a27974a1e0b788\"" Apr 25 01:26:42.955807 systemd[1]: Started cri-containerd-99eb0a901f2c2deb6c9eb78adc322b1460f6ce1a387c6b1e01a27974a1e0b788.scope - libcontainer container 99eb0a901f2c2deb6c9eb78adc322b1460f6ce1a387c6b1e01a27974a1e0b788. Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:42.976 [WARNING][5393] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"6fe3ea34-3f40-4037-a8dd-3ea381b9fa21", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a", 
Pod:"calico-apiserver-684644c847-7pwrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali32027003cc3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:42.977 [INFO][5393] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:42.977 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" iface="eth0" netns="" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:42.977 [INFO][5393] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:42.977 [INFO][5393] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.017 [INFO][5421] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.017 [INFO][5421] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.017 [INFO][5421] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.028 [WARNING][5421] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.028 [INFO][5421] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" HandleID="k8s-pod-network.940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--7pwrk-eth0" Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.030 [INFO][5421] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:43.035561 containerd[1734]: 2026-04-25 01:26:43.032 [INFO][5393] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62" Apr 25 01:26:43.035998 containerd[1734]: time="2026-04-25T01:26:43.035601505Z" level=info msg="TearDown network for sandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" successfully" Apr 25 01:26:43.045184 containerd[1734]: time="2026-04-25T01:26:43.044662702Z" level=info msg="StartContainer for \"99eb0a901f2c2deb6c9eb78adc322b1460f6ce1a387c6b1e01a27974a1e0b788\" returns successfully" Apr 25 01:26:43.049511 containerd[1734]: time="2026-04-25T01:26:43.049298061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:26:43.049511 containerd[1734]: time="2026-04-25T01:26:43.049363541Z" level=info msg="RemovePodSandbox \"940056ec92f8ee2042a1380c25a49a6a2a1fb015860fb022b5a2b3edd2766b62\" returns successfully" Apr 25 01:26:43.138632 containerd[1734]: time="2026-04-25T01:26:43.138496398Z" level=info msg="StopPodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\"" Apr 25 01:26:43.139454 containerd[1734]: time="2026-04-25T01:26:43.139144278Z" level=info msg="StopPodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\"" Apr 25 01:26:43.318142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030897603.mount: Deactivated successfully. Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.240 [INFO][5457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.240 [INFO][5457] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" iface="eth0" netns="/var/run/netns/cni-559cfd8c-a279-e04b-e42d-bb9868512d80" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.240 [INFO][5457] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" iface="eth0" netns="/var/run/netns/cni-559cfd8c-a279-e04b-e42d-bb9868512d80" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.242 [INFO][5457] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" iface="eth0" netns="/var/run/netns/cni-559cfd8c-a279-e04b-e42d-bb9868512d80" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.242 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.242 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.288 [INFO][5478] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.290 [INFO][5478] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.290 [INFO][5478] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.305 [WARNING][5478] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.305 [INFO][5478] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.309 [INFO][5478] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:43.319907 containerd[1734]: 2026-04-25 01:26:43.311 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:26:43.324135 containerd[1734]: time="2026-04-25T01:26:43.321099270Z" level=info msg="TearDown network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" successfully" Apr 25 01:26:43.324135 containerd[1734]: time="2026-04-25T01:26:43.321225790Z" level=info msg="StopPodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" returns successfully" Apr 25 01:26:43.326913 systemd[1]: run-netns-cni\x2d559cfd8c\x2da279\x2de04b\x2de42d\x2dbb9868512d80.mount: Deactivated successfully. 
Apr 25 01:26:43.331846 containerd[1734]: time="2026-04-25T01:26:43.331663947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-8lnh2,Uid:95e54b6a-50b6-4304-b325-75ea339a6594,Namespace:calico-system,Attempt:1,}" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.261 [INFO][5459] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.262 [INFO][5459] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" iface="eth0" netns="/var/run/netns/cni-6eeeb41a-f1fd-7ea6-4d17-eda63189b8e5" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.262 [INFO][5459] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" iface="eth0" netns="/var/run/netns/cni-6eeeb41a-f1fd-7ea6-4d17-eda63189b8e5" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.264 [INFO][5459] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" iface="eth0" netns="/var/run/netns/cni-6eeeb41a-f1fd-7ea6-4d17-eda63189b8e5" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.264 [INFO][5459] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.264 [INFO][5459] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.307 [INFO][5483] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.309 [INFO][5483] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.309 [INFO][5483] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.332 [WARNING][5483] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.332 [INFO][5483] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.335 [INFO][5483] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:43.345540 containerd[1734]: 2026-04-25 01:26:43.339 [INFO][5459] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:26:43.345540 containerd[1734]: time="2026-04-25T01:26:43.343159424Z" level=info msg="TearDown network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" successfully" Apr 25 01:26:43.345540 containerd[1734]: time="2026-04-25T01:26:43.343186264Z" level=info msg="StopPodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" returns successfully" Apr 25 01:26:43.346306 systemd[1]: run-netns-cni\x2d6eeeb41a\x2df1fd\x2d7ea6\x2d4d17\x2deda63189b8e5.mount: Deactivated successfully. 
Apr 25 01:26:43.351256 containerd[1734]: time="2026-04-25T01:26:43.351224342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xz5zq,Uid:533f9528-3ff0-42b3-85ca-983925f11ebc,Namespace:kube-system,Attempt:1,}" Apr 25 01:26:43.452727 kubelet[3177]: I0425 01:26:43.452391 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-hmvfq" podStartSLOduration=55.452375235 podStartE2EDuration="55.452375235s" podCreationTimestamp="2026-04-25 01:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 01:26:43.449994516 +0000 UTC m=+61.428616243" watchObservedRunningTime="2026-04-25 01:26:43.452375235 +0000 UTC m=+61.430996962" Apr 25 01:26:43.617093 systemd-networkd[1367]: cali4054c2c0d9d: Link UP Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.455 [INFO][5493] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0 calico-apiserver-684644c847- calico-system 95e54b6a-50b6-4304-b325-75ea339a6594 1031 0 2026-04-25 01:26:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:684644c847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec calico-apiserver-684644c847-8lnh2 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4054c2c0d9d [] [] }} ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.456 [INFO][5493] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.521 [INFO][5516] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" HandleID="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.538 [INFO][5516] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" HandleID="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f99d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"calico-apiserver-684644c847-8lnh2", "timestamp":"2026-04-25 01:26:43.521373817 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004171e0)} Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.538 [INFO][5516] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.539 [INFO][5516] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.539 [INFO][5516] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.542 [INFO][5516] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.553 [INFO][5516] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.565 [INFO][5516] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.568 [INFO][5516] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.573 [INFO][5516] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.573 [INFO][5516] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.575 [INFO][5516] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.582 [INFO][5516] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.607 [INFO][5516] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.111.69/26] block=192.168.111.64/26 handle="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.607 [INFO][5516] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.69/26] handle="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.607 [INFO][5516] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:43.783233 containerd[1734]: 2026-04-25 01:26:43.607 [INFO][5516] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.69/26] IPv6=[] ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" HandleID="k8s-pod-network.bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.617717 systemd-networkd[1367]: cali4054c2c0d9d: Gained carrier Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.612 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"95e54b6a-50b6-4304-b325-75ea339a6594", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"calico-apiserver-684644c847-8lnh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4054c2c0d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.612 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.69/32] ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.612 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4054c2c0d9d ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.618 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" 
Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.620 [INFO][5493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"95e54b6a-50b6-4304-b325-75ea339a6594", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc", Pod:"calico-apiserver-684644c847-8lnh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, 
InterfaceName:"cali4054c2c0d9d", MAC:"0a:37:e5:c5:2b:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.640 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc" Namespace="calico-system" Pod="calico-apiserver-684644c847-8lnh2" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:26:43.783883 containerd[1734]: 2026-04-25 01:26:43.517 [INFO][5504] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0 coredns-7d764666f9- kube-system 533f9528-3ff0-42b3-85ca-983925f11ebc 1032 0 2026-04-25 01:25:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec coredns-7d764666f9-xz5zq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40dae98f140 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-" Apr 25 01:26:43.703500 systemd-networkd[1367]: cali40dae98f140: Link UP Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.519 [INFO][5504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 
01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.591 [INFO][5526] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" HandleID="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.606 [INFO][5526] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" HandleID="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380310), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"coredns-7d764666f9-xz5zq", "timestamp":"2026-04-25 01:26:43.591095199 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000435760)} Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.606 [INFO][5526] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.608 [INFO][5526] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.608 [INFO][5526] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.643 [INFO][5526] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.655 [INFO][5526] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.663 [INFO][5526] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.668 [INFO][5526] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.670 [INFO][5526] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.670 [INFO][5526] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.673 [INFO][5526] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.680 [INFO][5526] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.693 [INFO][5526] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.111.70/26] block=192.168.111.64/26 handle="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.693 [INFO][5526] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.70/26] handle="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.693 [INFO][5526] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:43.784176 containerd[1734]: 2026-04-25 01:26:43.693 [INFO][5526] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.70/26] IPv6=[] ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" HandleID="k8s-pod-network.5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.704647 systemd-networkd[1367]: cali40dae98f140: Gained carrier Apr 25 01:26:43.791156 containerd[1734]: 2026-04-25 01:26:43.699 [INFO][5504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"533f9528-3ff0-42b3-85ca-983925f11ebc", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", 
"pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"coredns-7d764666f9-xz5zq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40dae98f140", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:43.791156 containerd[1734]: 2026-04-25 01:26:43.699 [INFO][5504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.70/32] ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.791156 containerd[1734]: 2026-04-25 01:26:43.699 [INFO][5504] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40dae98f140 ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.791156 containerd[1734]: 2026-04-25 01:26:43.703 [INFO][5504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.791156 containerd[1734]: 2026-04-25 01:26:43.705 [INFO][5504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"533f9528-3ff0-42b3-85ca-983925f11ebc", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", 
ContainerID:"5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be", Pod:"coredns-7d764666f9-xz5zq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40dae98f140", MAC:"62:8b:99:a9:a5:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:43.791367 containerd[1734]: 2026-04-25 01:26:43.720 [INFO][5504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be" Namespace="kube-system" Pod="coredns-7d764666f9-xz5zq" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:26:43.896158 containerd[1734]: time="2026-04-25T01:26:43.894399479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:43.896158 containerd[1734]: time="2026-04-25T01:26:43.895890679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:43.896158 containerd[1734]: time="2026-04-25T01:26:43.895908759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:43.896815 containerd[1734]: time="2026-04-25T01:26:43.896720039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:43.921641 systemd[1]: Started cri-containerd-bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc.scope - libcontainer container bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc. Apr 25 01:26:43.950119 containerd[1734]: time="2026-04-25T01:26:43.950081545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-684644c847-8lnh2,Uid:95e54b6a-50b6-4304-b325-75ea339a6594,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc\"" Apr 25 01:26:44.002357 containerd[1734]: time="2026-04-25T01:26:44.002259451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:44.002357 containerd[1734]: time="2026-04-25T01:26:44.002321011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:44.002357 containerd[1734]: time="2026-04-25T01:26:44.002337331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:44.002823 containerd[1734]: time="2026-04-25T01:26:44.002510251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:44.021604 systemd[1]: Started cri-containerd-5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be.scope - libcontainer container 5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be. Apr 25 01:26:44.042531 containerd[1734]: time="2026-04-25T01:26:44.042312721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:44.047031 containerd[1734]: time="2026-04-25T01:26:44.045914720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 25 01:26:44.052262 containerd[1734]: time="2026-04-25T01:26:44.052214998Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:44.053817 containerd[1734]: time="2026-04-25T01:26:44.053781718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-xz5zq,Uid:533f9528-3ff0-42b3-85ca-983925f11ebc,Namespace:kube-system,Attempt:1,} returns sandbox id \"5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be\"" Apr 25 01:26:44.060785 containerd[1734]: time="2026-04-25T01:26:44.060649196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:44.062340 containerd[1734]: time="2026-04-25T01:26:44.062208315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" 
in 3.558775146s" Apr 25 01:26:44.062340 containerd[1734]: time="2026-04-25T01:26:44.062248875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 25 01:26:44.064345 containerd[1734]: time="2026-04-25T01:26:44.064305275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 01:26:44.066464 containerd[1734]: time="2026-04-25T01:26:44.066323474Z" level=info msg="CreateContainer within sandbox \"5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 25 01:26:44.083188 containerd[1734]: time="2026-04-25T01:26:44.083042430Z" level=info msg="CreateContainer within sandbox \"282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 25 01:26:44.130872 systemd-networkd[1367]: cali3f92490ec7e: Gained IPv6LL Apr 25 01:26:44.133185 containerd[1734]: time="2026-04-25T01:26:44.133135257Z" level=info msg="CreateContainer within sandbox \"282efb1f340a69dfa2b04af18c9cf795b6e789ec8a66d09953f26ecc34aa0c9d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b6534db61ac425295878fe37b0f4ead1cf384515ab40ef1d5e929119aa3464ff\"" Apr 25 01:26:44.135192 containerd[1734]: time="2026-04-25T01:26:44.133882656Z" level=info msg="StartContainer for \"b6534db61ac425295878fe37b0f4ead1cf384515ab40ef1d5e929119aa3464ff\"" Apr 25 01:26:44.142063 containerd[1734]: time="2026-04-25T01:26:44.142026054Z" level=info msg="StopPodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\"" Apr 25 01:26:44.145472 containerd[1734]: time="2026-04-25T01:26:44.145388893Z" level=info msg="StopPodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\"" Apr 25 01:26:44.159792 containerd[1734]: time="2026-04-25T01:26:44.159693170Z" level=info 
msg="CreateContainer within sandbox \"5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"636a846746de1cc87297e5e56bcab643ddc7bd0b54cfc684dc7804aafabe2f84\"" Apr 25 01:26:44.163916 containerd[1734]: time="2026-04-25T01:26:44.161700969Z" level=info msg="StartContainer for \"636a846746de1cc87297e5e56bcab643ddc7bd0b54cfc684dc7804aafabe2f84\"" Apr 25 01:26:44.217655 systemd[1]: Started cri-containerd-b6534db61ac425295878fe37b0f4ead1cf384515ab40ef1d5e929119aa3464ff.scope - libcontainer container b6534db61ac425295878fe37b0f4ead1cf384515ab40ef1d5e929119aa3464ff. Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.260 [INFO][5686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.261 [INFO][5686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" iface="eth0" netns="/var/run/netns/cni-f727209e-2437-058a-cd0a-92282e3bf5f2" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.262 [INFO][5686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" iface="eth0" netns="/var/run/netns/cni-f727209e-2437-058a-cd0a-92282e3bf5f2" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.262 [INFO][5686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" iface="eth0" netns="/var/run/netns/cni-f727209e-2437-058a-cd0a-92282e3bf5f2" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.262 [INFO][5686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.262 [INFO][5686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.292 [INFO][5726] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.293 [INFO][5726] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.293 [INFO][5726] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.302 [WARNING][5726] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.302 [INFO][5726] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.304 [INFO][5726] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:44.307147 containerd[1734]: 2026-04-25 01:26:44.305 [INFO][5686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.260 [INFO][5695] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.261 [INFO][5695] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" iface="eth0" netns="/var/run/netns/cni-8213ec87-c826-b7da-8cab-a82a1690a257" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.261 [INFO][5695] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" iface="eth0" netns="/var/run/netns/cni-8213ec87-c826-b7da-8cab-a82a1690a257" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.261 [INFO][5695] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" iface="eth0" netns="/var/run/netns/cni-8213ec87-c826-b7da-8cab-a82a1690a257" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.261 [INFO][5695] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.261 [INFO][5695] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.299 [INFO][5724] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.300 [INFO][5724] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.304 [INFO][5724] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.318 [WARNING][5724] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.318 [INFO][5724] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.320 [INFO][5724] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:26:44.325234 containerd[1734]: 2026-04-25 01:26:44.322 [INFO][5695] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:26:44.341568 containerd[1734]: time="2026-04-25T01:26:44.339582443Z" level=info msg="TearDown network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" successfully" Apr 25 01:26:44.341568 containerd[1734]: time="2026-04-25T01:26:44.339618723Z" level=info msg="StopPodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" returns successfully" Apr 25 01:26:44.342130 containerd[1734]: time="2026-04-25T01:26:44.342091922Z" level=info msg="TearDown network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" successfully" Apr 25 01:26:44.342289 containerd[1734]: time="2026-04-25T01:26:44.342260042Z" level=info msg="StopPodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" returns successfully" Apr 25 01:26:44.342678 systemd[1]: run-netns-cni\x2df727209e\x2d2437\x2d058a\x2dcd0a\x2d92282e3bf5f2.mount: Deactivated successfully. 
Apr 25 01:26:44.351575 systemd[1]: run-netns-cni\x2d8213ec87\x2dc826\x2db7da\x2d8cab\x2da82a1690a257.mount: Deactivated successfully. Apr 25 01:26:44.366735 systemd[1]: Started cri-containerd-636a846746de1cc87297e5e56bcab643ddc7bd0b54cfc684dc7804aafabe2f84.scope - libcontainer container 636a846746de1cc87297e5e56bcab643ddc7bd0b54cfc684dc7804aafabe2f84. Apr 25 01:26:44.524823 containerd[1734]: time="2026-04-25T01:26:44.524591394Z" level=info msg="StartContainer for \"b6534db61ac425295878fe37b0f4ead1cf384515ab40ef1d5e929119aa3464ff\" returns successfully" Apr 25 01:26:44.526470 containerd[1734]: time="2026-04-25T01:26:44.525589834Z" level=info msg="StartContainer for \"636a846746de1cc87297e5e56bcab643ddc7bd0b54cfc684dc7804aafabe2f84\" returns successfully" Apr 25 01:26:44.533253 containerd[1734]: time="2026-04-25T01:26:44.533223272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bd8bbdbd-vbd9d,Uid:bfc2469b-0460-4c01-a157-10152c3a426a,Namespace:calico-system,Attempt:1,}" Apr 25 01:26:44.541709 containerd[1734]: time="2026-04-25T01:26:44.541670189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9jdr,Uid:824a5d45-7c15-4527-82b4-bbcfeeb63e50,Namespace:calico-system,Attempt:1,}" Apr 25 01:26:44.748845 systemd-networkd[1367]: calie931fe8acc6: Link UP Apr 25 01:26:44.750030 systemd-networkd[1367]: calie931fe8acc6: Gained carrier Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.650 [INFO][5781] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0 calico-kube-controllers-85bd8bbdbd- calico-system bfc2469b-0460-4c01-a157-10152c3a426a 1058 0 2026-04-25 01:26:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85bd8bbdbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec calico-kube-controllers-85bd8bbdbd-vbd9d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie931fe8acc6 [] [] }} ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.650 [INFO][5781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.692 [INFO][5805] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" HandleID="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.702 [INFO][5805] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" HandleID="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381d80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"calico-kube-controllers-85bd8bbdbd-vbd9d", "timestamp":"2026-04-25 01:26:44.69273407 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186dc0)} Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.702 [INFO][5805] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.703 [INFO][5805] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.703 [INFO][5805] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.705 [INFO][5805] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.710 [INFO][5805] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.716 [INFO][5805] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.718 [INFO][5805] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.720 [INFO][5805] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.720 [INFO][5805] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 
containerd[1734]: 2026-04-25 01:26:44.722 [INFO][5805] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.733 [INFO][5805] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.743 [INFO][5805] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.111.71/26] block=192.168.111.64/26 handle="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.743 [INFO][5805] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.71/26] handle="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.743 [INFO][5805] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 25 01:26:44.770617 containerd[1734]: 2026-04-25 01:26:44.743 [INFO][5805] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.71/26] IPv6=[] ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" HandleID="k8s-pod-network.1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.772071 containerd[1734]: 2026-04-25 01:26:44.745 [INFO][5781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0", GenerateName:"calico-kube-controllers-85bd8bbdbd-", Namespace:"calico-system", SelfLink:"", UID:"bfc2469b-0460-4c01-a157-10152c3a426a", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bd8bbdbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"calico-kube-controllers-85bd8bbdbd-vbd9d", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie931fe8acc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:44.772071 containerd[1734]: 2026-04-25 01:26:44.745 [INFO][5781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.71/32] ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.772071 containerd[1734]: 2026-04-25 01:26:44.745 [INFO][5781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie931fe8acc6 ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.772071 containerd[1734]: 2026-04-25 01:26:44.749 [INFO][5781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.772071 containerd[1734]: 2026-04-25 01:26:44.751 [INFO][5781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" 
WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0", GenerateName:"calico-kube-controllers-85bd8bbdbd-", Namespace:"calico-system", SelfLink:"", UID:"bfc2469b-0460-4c01-a157-10152c3a426a", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bd8bbdbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b", Pod:"calico-kube-controllers-85bd8bbdbd-vbd9d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie931fe8acc6", MAC:"d6:75:34:0a:9e:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:44.772071 containerd[1734]: 2026-04-25 01:26:44.765 [INFO][5781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b" Namespace="calico-system" 
Pod="calico-kube-controllers-85bd8bbdbd-vbd9d" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:26:44.842020 containerd[1734]: time="2026-04-25T01:26:44.841713111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:44.842020 containerd[1734]: time="2026-04-25T01:26:44.841765631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:44.842020 containerd[1734]: time="2026-04-25T01:26:44.841783991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:44.842020 containerd[1734]: time="2026-04-25T01:26:44.841867951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:44.865139 systemd-networkd[1367]: calic1f058b605f: Link UP Apr 25 01:26:44.866960 systemd-networkd[1367]: calic1f058b605f: Gained carrier Apr 25 01:26:44.877696 systemd[1]: Started cri-containerd-1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b.scope - libcontainer container 1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b. 
Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.658 [INFO][5790] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0 csi-node-driver- calico-system 824a5d45-7c15-4527-82b4-bbcfeeb63e50 1057 0 2026-04-25 01:26:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-cf3dcbc0ec csi-node-driver-m9jdr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic1f058b605f [] [] }} ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.659 [INFO][5790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.694 [INFO][5810] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" HandleID="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.709 [INFO][5810] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" 
HandleID="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-cf3dcbc0ec", "pod":"csi-node-driver-m9jdr", "timestamp":"2026-04-25 01:26:44.694253949 +0000 UTC"}, Hostname:"ci-4081.3.6-n-cf3dcbc0ec", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000281600)} Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.709 [INFO][5810] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.743 [INFO][5810] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.743 [INFO][5810] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-cf3dcbc0ec' Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.806 [INFO][5810] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.811 [INFO][5810] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.816 [INFO][5810] ipam/ipam.go 526: Trying affinity for 192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.818 [INFO][5810] ipam/ipam.go 160: Attempting to load block cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.820 [INFO][5810] ipam/ipam.go 
237: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.820 [INFO][5810] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.821 [INFO][5810] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428 Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.830 [INFO][5810] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.846 [INFO][5810] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.111.72/26] block=192.168.111.64/26 handle="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.846 [INFO][5810] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.111.72/26] handle="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" host="ci-4081.3.6-n-cf3dcbc0ec" Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.846 [INFO][5810] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 25 01:26:44.910140 containerd[1734]: 2026-04-25 01:26:44.846 [INFO][5810] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.111.72/26] IPv6=[] ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" HandleID="k8s-pod-network.5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.911813 containerd[1734]: 2026-04-25 01:26:44.849 [INFO][5790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"824a5d45-7c15-4527-82b4-bbcfeeb63e50", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"", Pod:"csi-node-driver-m9jdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.72/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1f058b605f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:44.911813 containerd[1734]: 2026-04-25 01:26:44.849 [INFO][5790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.72/32] ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.911813 containerd[1734]: 2026-04-25 01:26:44.849 [INFO][5790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1f058b605f ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.911813 containerd[1734]: 2026-04-25 01:26:44.866 [INFO][5790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.911813 containerd[1734]: 2026-04-25 01:26:44.876 [INFO][5790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"824a5d45-7c15-4527-82b4-bbcfeeb63e50", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428", Pod:"csi-node-driver-m9jdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1f058b605f", MAC:"e6:3e:70:27:88:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:26:44.911813 containerd[1734]: 2026-04-25 01:26:44.904 [INFO][5790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428" Namespace="calico-system" Pod="csi-node-driver-m9jdr" WorkloadEndpoint="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:26:44.944648 containerd[1734]: time="2026-04-25T01:26:44.944612364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bd8bbdbd-vbd9d,Uid:bfc2469b-0460-4c01-a157-10152c3a426a,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b\"" Apr 25 01:26:44.946067 containerd[1734]: time="2026-04-25T01:26:44.945725323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 01:26:44.946067 containerd[1734]: time="2026-04-25T01:26:44.945773723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 01:26:44.946067 containerd[1734]: time="2026-04-25T01:26:44.945784843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:44.946067 containerd[1734]: time="2026-04-25T01:26:44.945858283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 01:26:44.966643 systemd[1]: Started cri-containerd-5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428.scope - libcontainer container 5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428. 
Apr 25 01:26:44.995684 containerd[1734]: time="2026-04-25T01:26:44.995641870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9jdr,Uid:824a5d45-7c15-4527-82b4-bbcfeeb63e50,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428\"" Apr 25 01:26:45.410636 systemd-networkd[1367]: cali4054c2c0d9d: Gained IPv6LL Apr 25 01:26:45.562645 kubelet[3177]: I0425 01:26:45.562558 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-xz5zq" podStartSLOduration=57.562405682 podStartE2EDuration="57.562405682s" podCreationTimestamp="2026-04-25 01:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 01:26:45.561670562 +0000 UTC m=+63.540292249" watchObservedRunningTime="2026-04-25 01:26:45.562405682 +0000 UTC m=+63.541027409" Apr 25 01:26:45.628411 kubelet[3177]: I0425 01:26:45.628323 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-s6dwd" podStartSLOduration=40.067385478 podStartE2EDuration="43.628308064s" podCreationTimestamp="2026-04-25 01:26:02 +0000 UTC" firstStartedPulling="2026-04-25 01:26:40.502896529 +0000 UTC m=+58.481518216" lastFinishedPulling="2026-04-25 01:26:44.063819075 +0000 UTC m=+62.042440802" observedRunningTime="2026-04-25 01:26:45.594097273 +0000 UTC m=+63.572719000" watchObservedRunningTime="2026-04-25 01:26:45.628308064 +0000 UTC m=+63.606929791" Apr 25 01:26:45.666583 systemd-networkd[1367]: cali40dae98f140: Gained IPv6LL Apr 25 01:26:46.562577 systemd-networkd[1367]: calie931fe8acc6: Gained IPv6LL Apr 25 01:26:46.626616 systemd-networkd[1367]: calic1f058b605f: Gained IPv6LL Apr 25 01:26:47.843576 containerd[1734]: time="2026-04-25T01:26:47.843533563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:47.846697 containerd[1734]: time="2026-04-25T01:26:47.846661602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 25 01:26:47.849921 containerd[1734]: time="2026-04-25T01:26:47.849864561Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:47.854325 containerd[1734]: time="2026-04-25T01:26:47.854273440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:47.855479 containerd[1734]: time="2026-04-25T01:26:47.855090120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.790412205s" Apr 25 01:26:47.855479 containerd[1734]: time="2026-04-25T01:26:47.855125320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 25 01:26:47.856488 containerd[1734]: time="2026-04-25T01:26:47.856452280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 25 01:26:47.864464 containerd[1734]: time="2026-04-25T01:26:47.863714998Z" level=info msg="CreateContainer within sandbox \"534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 01:26:47.901795 containerd[1734]: time="2026-04-25T01:26:47.901590868Z" 
level=info msg="CreateContainer within sandbox \"534a631d0c4a629289a9384b34e42ea1388f929a1d05cddaff69b54ba8a6162a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ce4f633db9f48bd4f3ce6c6f87e8709c5c214498034dca4a25dc0c838e633f44\"" Apr 25 01:26:47.903738 containerd[1734]: time="2026-04-25T01:26:47.903280547Z" level=info msg="StartContainer for \"ce4f633db9f48bd4f3ce6c6f87e8709c5c214498034dca4a25dc0c838e633f44\"" Apr 25 01:26:47.970601 systemd[1]: Started cri-containerd-ce4f633db9f48bd4f3ce6c6f87e8709c5c214498034dca4a25dc0c838e633f44.scope - libcontainer container ce4f633db9f48bd4f3ce6c6f87e8709c5c214498034dca4a25dc0c838e633f44. Apr 25 01:26:48.007930 containerd[1734]: time="2026-04-25T01:26:48.007885840Z" level=info msg="StartContainer for \"ce4f633db9f48bd4f3ce6c6f87e8709c5c214498034dca4a25dc0c838e633f44\" returns successfully" Apr 25 01:26:48.177701 containerd[1734]: time="2026-04-25T01:26:48.176859596Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:48.183604 containerd[1734]: time="2026-04-25T01:26:48.182949954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 25 01:26:48.186723 containerd[1734]: time="2026-04-25T01:26:48.186686993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 330.193953ms" Apr 25 01:26:48.186935 containerd[1734]: time="2026-04-25T01:26:48.186915153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 25 
01:26:48.200279 containerd[1734]: time="2026-04-25T01:26:48.200089549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 25 01:26:48.202775 containerd[1734]: time="2026-04-25T01:26:48.202746669Z" level=info msg="CreateContainer within sandbox \"bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 25 01:26:48.248339 containerd[1734]: time="2026-04-25T01:26:48.248299177Z" level=info msg="CreateContainer within sandbox \"bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f78b5ba6cfeebe20279aa95e3c717fee90bdf48323a76c82417b04a4a0bd5f9e\"" Apr 25 01:26:48.250419 containerd[1734]: time="2026-04-25T01:26:48.250390776Z" level=info msg="StartContainer for \"f78b5ba6cfeebe20279aa95e3c717fee90bdf48323a76c82417b04a4a0bd5f9e\"" Apr 25 01:26:48.279607 systemd[1]: Started cri-containerd-f78b5ba6cfeebe20279aa95e3c717fee90bdf48323a76c82417b04a4a0bd5f9e.scope - libcontainer container f78b5ba6cfeebe20279aa95e3c717fee90bdf48323a76c82417b04a4a0bd5f9e. 
Apr 25 01:26:48.328697 containerd[1734]: time="2026-04-25T01:26:48.328657436Z" level=info msg="StartContainer for \"f78b5ba6cfeebe20279aa95e3c717fee90bdf48323a76c82417b04a4a0bd5f9e\" returns successfully" Apr 25 01:26:48.567973 kubelet[3177]: I0425 01:26:48.567475 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-684644c847-8lnh2" podStartSLOduration=42.330837205 podStartE2EDuration="46.567457493s" podCreationTimestamp="2026-04-25 01:26:02 +0000 UTC" firstStartedPulling="2026-04-25 01:26:43.951837824 +0000 UTC m=+61.930459551" lastFinishedPulling="2026-04-25 01:26:48.188458112 +0000 UTC m=+66.167079839" observedRunningTime="2026-04-25 01:26:48.565575214 +0000 UTC m=+66.544196941" watchObservedRunningTime="2026-04-25 01:26:48.567457493 +0000 UTC m=+66.546079300" Apr 25 01:26:48.888783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256935131.mount: Deactivated successfully. Apr 25 01:26:49.551582 kubelet[3177]: I0425 01:26:49.551548 3177 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 01:26:50.184196 kubelet[3177]: I0425 01:26:50.183750 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-684644c847-7pwrk" podStartSLOduration=41.859467808 podStartE2EDuration="48.183736509s" podCreationTimestamp="2026-04-25 01:26:02 +0000 UTC" firstStartedPulling="2026-04-25 01:26:41.531883659 +0000 UTC m=+59.510505386" lastFinishedPulling="2026-04-25 01:26:47.85615236 +0000 UTC m=+65.834774087" observedRunningTime="2026-04-25 01:26:48.586868288 +0000 UTC m=+66.565490055" watchObservedRunningTime="2026-04-25 01:26:50.183736509 +0000 UTC m=+68.162358236" Apr 25 01:26:52.099762 containerd[1734]: time="2026-04-25T01:26:52.099713354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:52.103084 containerd[1734]: 
time="2026-04-25T01:26:52.103052913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 25 01:26:52.106474 containerd[1734]: time="2026-04-25T01:26:52.106269233Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:52.111393 containerd[1734]: time="2026-04-25T01:26:52.111344992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:52.112457 containerd[1734]: time="2026-04-25T01:26:52.112074072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.911930003s" Apr 25 01:26:52.112457 containerd[1734]: time="2026-04-25T01:26:52.112132992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 25 01:26:52.116483 containerd[1734]: time="2026-04-25T01:26:52.115615391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 25 01:26:52.146149 containerd[1734]: time="2026-04-25T01:26:52.146107946Z" level=info msg="CreateContainer within sandbox \"1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 25 01:26:52.589484 containerd[1734]: time="2026-04-25T01:26:52.589414191Z" level=info msg="CreateContainer within sandbox 
\"1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e794307e845fee62583decf8d386d9699f6ad57350fbad63d488bdb096494113\"" Apr 25 01:26:52.590783 containerd[1734]: time="2026-04-25T01:26:52.590277670Z" level=info msg="StartContainer for \"e794307e845fee62583decf8d386d9699f6ad57350fbad63d488bdb096494113\"" Apr 25 01:26:52.626611 systemd[1]: Started cri-containerd-e794307e845fee62583decf8d386d9699f6ad57350fbad63d488bdb096494113.scope - libcontainer container e794307e845fee62583decf8d386d9699f6ad57350fbad63d488bdb096494113. Apr 25 01:26:52.664161 containerd[1734]: time="2026-04-25T01:26:52.664098258Z" level=info msg="StartContainer for \"e794307e845fee62583decf8d386d9699f6ad57350fbad63d488bdb096494113\" returns successfully" Apr 25 01:26:53.587934 kubelet[3177]: I0425 01:26:53.587677 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85bd8bbdbd-vbd9d" podStartSLOduration=42.421960833 podStartE2EDuration="49.587641221s" podCreationTimestamp="2026-04-25 01:26:04 +0000 UTC" firstStartedPulling="2026-04-25 01:26:44.948283483 +0000 UTC m=+62.926905210" lastFinishedPulling="2026-04-25 01:26:52.113963871 +0000 UTC m=+70.092585598" observedRunningTime="2026-04-25 01:26:53.584779262 +0000 UTC m=+71.563400989" watchObservedRunningTime="2026-04-25 01:26:53.587641221 +0000 UTC m=+71.566262948" Apr 25 01:26:53.755565 containerd[1734]: time="2026-04-25T01:26:53.755515553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:53.758890 containerd[1734]: time="2026-04-25T01:26:53.758747792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 25 01:26:53.762118 containerd[1734]: time="2026-04-25T01:26:53.762019272Z" level=info msg="ImageCreate event 
name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:53.766997 containerd[1734]: time="2026-04-25T01:26:53.766947871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:53.767735 containerd[1734]: time="2026-04-25T01:26:53.767709511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.65206204s" Apr 25 01:26:53.767903 containerd[1734]: time="2026-04-25T01:26:53.767822271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 25 01:26:53.775989 containerd[1734]: time="2026-04-25T01:26:53.775954750Z" level=info msg="CreateContainer within sandbox \"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 25 01:26:53.816130 containerd[1734]: time="2026-04-25T01:26:53.816008663Z" level=info msg="CreateContainer within sandbox \"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"70624801b68c19792d1fbbfb7e83aaf0d58566d876da48766bdb22af58cdf2a7\"" Apr 25 01:26:53.816994 containerd[1734]: time="2026-04-25T01:26:53.816724223Z" level=info msg="StartContainer for \"70624801b68c19792d1fbbfb7e83aaf0d58566d876da48766bdb22af58cdf2a7\"" Apr 25 01:26:53.850306 systemd[1]: Started 
cri-containerd-70624801b68c19792d1fbbfb7e83aaf0d58566d876da48766bdb22af58cdf2a7.scope - libcontainer container 70624801b68c19792d1fbbfb7e83aaf0d58566d876da48766bdb22af58cdf2a7. Apr 25 01:26:53.888819 containerd[1734]: time="2026-04-25T01:26:53.888774130Z" level=info msg="StartContainer for \"70624801b68c19792d1fbbfb7e83aaf0d58566d876da48766bdb22af58cdf2a7\" returns successfully" Apr 25 01:26:53.891754 containerd[1734]: time="2026-04-25T01:26:53.891718530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 25 01:26:55.998305 containerd[1734]: time="2026-04-25T01:26:55.997782773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:56.001002 containerd[1734]: time="2026-04-25T01:26:56.000957293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 25 01:26:56.004562 containerd[1734]: time="2026-04-25T01:26:56.004517492Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:56.009922 containerd[1734]: time="2026-04-25T01:26:56.009894811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 25 01:26:56.010931 containerd[1734]: time="2026-04-25T01:26:56.010517371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", 
size \"15163768\" in 2.118759561s" Apr 25 01:26:56.011370 containerd[1734]: time="2026-04-25T01:26:56.011347771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 25 01:26:56.021258 containerd[1734]: time="2026-04-25T01:26:56.021134929Z" level=info msg="CreateContainer within sandbox \"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 25 01:26:56.057815 containerd[1734]: time="2026-04-25T01:26:56.057766243Z" level=info msg="CreateContainer within sandbox \"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5b907286fb1ff7928a6a5d8aa82118f382d810a661e9733a7c2dee0d3274ee95\"" Apr 25 01:26:56.059730 containerd[1734]: time="2026-04-25T01:26:56.058628603Z" level=info msg="StartContainer for \"5b907286fb1ff7928a6a5d8aa82118f382d810a661e9733a7c2dee0d3274ee95\"" Apr 25 01:26:56.091598 systemd[1]: Started cri-containerd-5b907286fb1ff7928a6a5d8aa82118f382d810a661e9733a7c2dee0d3274ee95.scope - libcontainer container 5b907286fb1ff7928a6a5d8aa82118f382d810a661e9733a7c2dee0d3274ee95. 
Apr 25 01:26:56.125972 containerd[1734]: time="2026-04-25T01:26:56.125931671Z" level=info msg="StartContainer for \"5b907286fb1ff7928a6a5d8aa82118f382d810a661e9733a7c2dee0d3274ee95\" returns successfully" Apr 25 01:26:56.245392 kubelet[3177]: I0425 01:26:56.245286 3177 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 25 01:26:56.245392 kubelet[3177]: I0425 01:26:56.245319 3177 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 25 01:27:01.446226 kubelet[3177]: I0425 01:27:01.445183 3177 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-m9jdr" podStartSLOduration=46.430469755 podStartE2EDuration="57.445167096s" podCreationTimestamp="2026-04-25 01:26:04 +0000 UTC" firstStartedPulling="2026-04-25 01:26:44.99731363 +0000 UTC m=+62.975935357" lastFinishedPulling="2026-04-25 01:26:56.012010971 +0000 UTC m=+73.990632698" observedRunningTime="2026-04-25 01:26:56.591808592 +0000 UTC m=+74.570430319" watchObservedRunningTime="2026-04-25 01:27:01.445167096 +0000 UTC m=+79.423788783" Apr 25 01:27:10.742613 kubelet[3177]: I0425 01:27:10.742196 3177 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 25 01:27:36.687870 systemd[1]: Started sshd@7-10.0.0.7:22-4.175.71.9:46670.service - OpenSSH per-connection server daemon (4.175.71.9:46670). Apr 25 01:27:37.613138 sshd[6407]: Accepted publickey for core from 4.175.71.9 port 46670 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ Apr 25 01:27:37.615969 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 01:27:37.620586 systemd-logind[1716]: New session 10 of user core. Apr 25 01:27:37.624597 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 25 01:27:38.378943 sshd[6407]: pam_unix(sshd:session): session closed for user core Apr 25 01:27:38.382610 systemd[1]: sshd@7-10.0.0.7:22-4.175.71.9:46670.service: Deactivated successfully. Apr 25 01:27:38.386045 systemd[1]: session-10.scope: Deactivated successfully. Apr 25 01:27:38.388017 systemd-logind[1716]: Session 10 logged out. Waiting for processes to exit. Apr 25 01:27:38.388851 systemd-logind[1716]: Removed session 10. Apr 25 01:27:43.053751 containerd[1734]: time="2026-04-25T01:27:43.053461533Z" level=info msg="StopPodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\"" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.090 [WARNING][6430] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0", GenerateName:"calico-kube-controllers-85bd8bbdbd-", Namespace:"calico-system", SelfLink:"", UID:"bfc2469b-0460-4c01-a157-10152c3a426a", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bd8bbdbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", 
ContainerID:"1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b", Pod:"calico-kube-controllers-85bd8bbdbd-vbd9d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie931fe8acc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.090 [INFO][6430] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.090 [INFO][6430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" iface="eth0" netns="" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.090 [INFO][6430] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.090 [INFO][6430] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.110 [INFO][6437] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.110 [INFO][6437] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.110 [INFO][6437] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.118 [WARNING][6437] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.118 [INFO][6437] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.119 [INFO][6437] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.123375 containerd[1734]: 2026-04-25 01:27:43.121 [INFO][6430] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.123375 containerd[1734]: time="2026-04-25T01:27:43.123242915Z" level=info msg="TearDown network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" successfully" Apr 25 01:27:43.123375 containerd[1734]: time="2026-04-25T01:27:43.123273155Z" level=info msg="StopPodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" returns successfully" Apr 25 01:27:43.124461 containerd[1734]: time="2026-04-25T01:27:43.124346715Z" level=info msg="RemovePodSandbox for \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\"" Apr 25 01:27:43.124461 containerd[1734]: time="2026-04-25T01:27:43.124387155Z" level=info msg="Forcibly stopping sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\"" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.162 [WARNING][6451] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0", GenerateName:"calico-kube-controllers-85bd8bbdbd-", Namespace:"calico-system", SelfLink:"", UID:"bfc2469b-0460-4c01-a157-10152c3a426a", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bd8bbdbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"1d1b8042bd5a8eceb1818eff740ce69bc5ab045e24a12912a6438452c6dc465b", Pod:"calico-kube-controllers-85bd8bbdbd-vbd9d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie931fe8acc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.162 [INFO][6451] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.162 [INFO][6451] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" iface="eth0" netns="" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.162 [INFO][6451] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.162 [INFO][6451] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.181 [INFO][6459] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.181 [INFO][6459] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.181 [INFO][6459] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.191 [WARNING][6459] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.191 [INFO][6459] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" HandleID="k8s-pod-network.360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--kube--controllers--85bd8bbdbd--vbd9d-eth0" Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.192 [INFO][6459] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.195460 containerd[1734]: 2026-04-25 01:27:43.193 [INFO][6451] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150" Apr 25 01:27:43.195860 containerd[1734]: time="2026-04-25T01:27:43.195503137Z" level=info msg="TearDown network for sandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" successfully" Apr 25 01:27:43.206968 containerd[1734]: time="2026-04-25T01:27:43.206919335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:27:43.207084 containerd[1734]: time="2026-04-25T01:27:43.206995415Z" level=info msg="RemovePodSandbox \"360d2fb90c737118025b054947aad4c12c1757fe693c5f2f2fa1fa3a2d21a150\" returns successfully" Apr 25 01:27:43.207754 containerd[1734]: time="2026-04-25T01:27:43.207498615Z" level=info msg="StopPodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\"" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.240 [WARNING][6473] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9a4c953d-e4d6-4586-907f-7af01091f4b3", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896", Pod:"coredns-7d764666f9-hmvfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f92490ec7e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.240 [INFO][6473] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.240 [INFO][6473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" iface="eth0" netns="" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.240 [INFO][6473] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.240 [INFO][6473] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.259 [INFO][6480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.259 [INFO][6480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.259 [INFO][6480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.267 [WARNING][6480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.268 [INFO][6480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.271 [INFO][6480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.277855 containerd[1734]: 2026-04-25 01:27:43.275 [INFO][6473] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.278706 containerd[1734]: time="2026-04-25T01:27:43.277902117Z" level=info msg="TearDown network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" successfully" Apr 25 01:27:43.278706 containerd[1734]: time="2026-04-25T01:27:43.277926557Z" level=info msg="StopPodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" returns successfully" Apr 25 01:27:43.279259 containerd[1734]: time="2026-04-25T01:27:43.279228557Z" level=info msg="RemovePodSandbox for \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\"" Apr 25 01:27:43.279307 containerd[1734]: time="2026-04-25T01:27:43.279261077Z" level=info msg="Forcibly stopping sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\"" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.318 [WARNING][6495] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9a4c953d-e4d6-4586-907f-7af01091f4b3", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"6fc7523dc6278caa0fce9a9d6fb76f6db5a313a90fbf52ee038f07dda80f1896", Pod:"coredns-7d764666f9-hmvfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f92490ec7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.319 [INFO][6495] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.319 [INFO][6495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" iface="eth0" netns="" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.319 [INFO][6495] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.319 [INFO][6495] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.336 [INFO][6502] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.336 [INFO][6502] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.336 [INFO][6502] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.345 [WARNING][6502] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.346 [INFO][6502] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" HandleID="k8s-pod-network.cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--hmvfq-eth0" Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.347 [INFO][6502] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.350642 containerd[1734]: 2026-04-25 01:27:43.348 [INFO][6495] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da" Apr 25 01:27:43.350642 containerd[1734]: time="2026-04-25T01:27:43.350621899Z" level=info msg="TearDown network for sandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" successfully" Apr 25 01:27:43.359216 containerd[1734]: time="2026-04-25T01:27:43.359136137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:27:43.359331 containerd[1734]: time="2026-04-25T01:27:43.359268217Z" level=info msg="RemovePodSandbox \"cb47bb78b55e32571f6e15a999fe80b965ba5feafad4856b6e72317194e984da\" returns successfully" Apr 25 01:27:43.360084 containerd[1734]: time="2026-04-25T01:27:43.359755737Z" level=info msg="StopPodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\"" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.392 [WARNING][6516] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"95e54b6a-50b6-4304-b325-75ea339a6594", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc", Pod:"calico-apiserver-684644c847-8lnh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4054c2c0d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.393 [INFO][6516] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.393 [INFO][6516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" iface="eth0" netns="" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.393 [INFO][6516] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.393 [INFO][6516] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.413 [INFO][6523] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.413 [INFO][6523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.413 [INFO][6523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.422 [WARNING][6523] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.422 [INFO][6523] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.423 [INFO][6523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.426588 containerd[1734]: 2026-04-25 01:27:43.424 [INFO][6516] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.427082 containerd[1734]: time="2026-04-25T01:27:43.426628520Z" level=info msg="TearDown network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" successfully" Apr 25 01:27:43.427082 containerd[1734]: time="2026-04-25T01:27:43.426652800Z" level=info msg="StopPodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" returns successfully" Apr 25 01:27:43.427864 containerd[1734]: time="2026-04-25T01:27:43.427564240Z" level=info msg="RemovePodSandbox for \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\"" Apr 25 01:27:43.427864 containerd[1734]: time="2026-04-25T01:27:43.427596880Z" level=info msg="Forcibly stopping sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\"" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.462 [WARNING][6538] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0", GenerateName:"calico-apiserver-684644c847-", Namespace:"calico-system", SelfLink:"", UID:"95e54b6a-50b6-4304-b325-75ea339a6594", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"684644c847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"bc745fa5caa3ce956eb6adb9b27d599d2ff797924912b6b141f76b5ed7efe7bc", Pod:"calico-apiserver-684644c847-8lnh2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4054c2c0d9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.462 [INFO][6538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.462 [INFO][6538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" iface="eth0" netns="" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.462 [INFO][6538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.462 [INFO][6538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.480 [INFO][6546] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.480 [INFO][6546] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.480 [INFO][6546] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.489 [WARNING][6546] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.489 [INFO][6546] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" HandleID="k8s-pod-network.b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-calico--apiserver--684644c847--8lnh2-eth0" Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.490 [INFO][6546] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.494021 containerd[1734]: 2026-04-25 01:27:43.492 [INFO][6538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17" Apr 25 01:27:43.494021 containerd[1734]: time="2026-04-25T01:27:43.493932063Z" level=info msg="TearDown network for sandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" successfully" Apr 25 01:27:43.503124 containerd[1734]: time="2026-04-25T01:27:43.503068421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:27:43.503405 containerd[1734]: time="2026-04-25T01:27:43.503280581Z" level=info msg="RemovePodSandbox \"b5d025818192a5196707717ae62b1b47402868a8a917a7f8fc4c0a16a14b0b17\" returns successfully" Apr 25 01:27:43.503845 containerd[1734]: time="2026-04-25T01:27:43.503706701Z" level=info msg="StopPodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\"" Apr 25 01:27:43.536522 systemd[1]: Started sshd@8-10.0.0.7:22-4.175.71.9:46674.service - OpenSSH per-connection server daemon (4.175.71.9:46674). Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.544 [WARNING][6560] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"824a5d45-7c15-4527-82b4-bbcfeeb63e50", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428", Pod:"csi-node-driver-m9jdr", 
Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1f058b605f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.544 [INFO][6560] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.544 [INFO][6560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" iface="eth0" netns="" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.544 [INFO][6560] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.544 [INFO][6560] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.563 [INFO][6570] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.563 [INFO][6570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.563 [INFO][6570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.572 [WARNING][6570] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.572 [INFO][6570] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.574 [INFO][6570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.577345 containerd[1734]: 2026-04-25 01:27:43.575 [INFO][6560] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.578559 containerd[1734]: time="2026-04-25T01:27:43.577852642Z" level=info msg="TearDown network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" successfully" Apr 25 01:27:43.578559 containerd[1734]: time="2026-04-25T01:27:43.577894802Z" level=info msg="StopPodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" returns successfully" Apr 25 01:27:43.579063 containerd[1734]: time="2026-04-25T01:27:43.578720122Z" level=info msg="RemovePodSandbox for \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\"" Apr 25 01:27:43.579063 containerd[1734]: time="2026-04-25T01:27:43.578749042Z" level=info msg="Forcibly stopping sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\"" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.609 [WARNING][6585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"824a5d45-7c15-4527-82b4-bbcfeeb63e50", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"5d2d251081fbb32be99165864ebe6de5e157a5490bede9c787c357287c6c6428", Pod:"csi-node-driver-m9jdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic1f058b605f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.609 [INFO][6585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.609 [INFO][6585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" iface="eth0" netns="" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.609 [INFO][6585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.609 [INFO][6585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.627 [INFO][6592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.627 [INFO][6592] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.627 [INFO][6592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.636 [WARNING][6592] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.636 [INFO][6592] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" HandleID="k8s-pod-network.4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-csi--node--driver--m9jdr-eth0" Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.637 [INFO][6592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 25 01:27:43.640514 containerd[1734]: 2026-04-25 01:27:43.638 [INFO][6585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869" Apr 25 01:27:43.641042 containerd[1734]: time="2026-04-25T01:27:43.640623627Z" level=info msg="TearDown network for sandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" successfully" Apr 25 01:27:43.648526 containerd[1734]: time="2026-04-25T01:27:43.648378465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 25 01:27:43.648526 containerd[1734]: time="2026-04-25T01:27:43.648474385Z" level=info msg="RemovePodSandbox \"4ea05b113e582bc70847bce4ccc8864bfdd37d4fd3157dcdab2c1db99522e869\" returns successfully" Apr 25 01:27:43.649081 containerd[1734]: time="2026-04-25T01:27:43.649057865Z" level=info msg="StopPodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\"" Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.681 [WARNING][6607] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"533f9528-3ff0-42b3-85ca-983925f11ebc", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be", Pod:"coredns-7d764666f9-xz5zq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40dae98f140", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.681 [INFO][6607] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.681 [INFO][6607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" iface="eth0" netns="" Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.681 [INFO][6607] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.681 [INFO][6607] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.703 [INFO][6614] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0" Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.703 [INFO][6614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.703 [INFO][6614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.711 [WARNING][6614] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0"
Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.711 [INFO][6614] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0"
Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.712 [INFO][6614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 25 01:27:43.716131 containerd[1734]: 2026-04-25 01:27:43.714 [INFO][6607] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238"
Apr 25 01:27:43.716844 containerd[1734]: time="2026-04-25T01:27:43.716170168Z" level=info msg="TearDown network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" successfully"
Apr 25 01:27:43.716844 containerd[1734]: time="2026-04-25T01:27:43.716195448Z" level=info msg="StopPodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" returns successfully"
Apr 25 01:27:43.716844 containerd[1734]: time="2026-04-25T01:27:43.716737448Z" level=info msg="RemovePodSandbox for \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\""
Apr 25 01:27:43.716844 containerd[1734]: time="2026-04-25T01:27:43.716763648Z" level=info msg="Forcibly stopping sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\""
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.751 [WARNING][6629] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"533f9528-3ff0-42b3-85ca-983925f11ebc", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.April, 25, 1, 25, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-cf3dcbc0ec", ContainerID:"5744ccbcd3ec822dfda81545bb03282d8e975aa74e781d3a8acb3db5fbe018be", Pod:"coredns-7d764666f9-xz5zq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40dae98f140", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.752 [INFO][6629] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238"
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.752 [INFO][6629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" iface="eth0" netns=""
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.752 [INFO][6629] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238"
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.752 [INFO][6629] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238"
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.770 [INFO][6637] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0"
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.770 [INFO][6637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.770 [INFO][6637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.778 [WARNING][6637] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0"
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.778 [INFO][6637] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" HandleID="k8s-pod-network.adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238" Workload="ci--4081.3.6--n--cf3dcbc0ec-k8s-coredns--7d764666f9--xz5zq-eth0"
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.785 [INFO][6637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 25 01:27:43.788225 containerd[1734]: 2026-04-25 01:27:43.786 [INFO][6629] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238"
Apr 25 01:27:43.788774 containerd[1734]: time="2026-04-25T01:27:43.788247910Z" level=info msg="TearDown network for sandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" successfully"
Apr 25 01:27:43.796820 containerd[1734]: time="2026-04-25T01:27:43.796767988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 25 01:27:43.796944 containerd[1734]: time="2026-04-25T01:27:43.796845388Z" level=info msg="RemovePodSandbox \"adbbd0069d385e5687aef1b89e7bfefe6ca352666177ebbedbe2cf15ba7fd238\" returns successfully"
Apr 25 01:27:44.426470 sshd[6567]: Accepted publickey for core from 4.175.71.9 port 46674 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:27:44.427943 sshd[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:27:44.433026 systemd-logind[1716]: New session 11 of user core.
Apr 25 01:27:44.436775 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 25 01:27:45.216569 sshd[6567]: pam_unix(sshd:session): session closed for user core
Apr 25 01:27:45.219248 systemd[1]: sshd@8-10.0.0.7:22-4.175.71.9:46674.service: Deactivated successfully.
Apr 25 01:27:45.222394 systemd[1]: session-11.scope: Deactivated successfully.
Apr 25 01:27:45.224578 systemd-logind[1716]: Session 11 logged out. Waiting for processes to exit.
Apr 25 01:27:45.225692 systemd-logind[1716]: Removed session 11.
Apr 25 01:27:50.373685 systemd[1]: Started sshd@9-10.0.0.7:22-4.175.71.9:54648.service - OpenSSH per-connection server daemon (4.175.71.9:54648).
Apr 25 01:27:51.255897 sshd[6684]: Accepted publickey for core from 4.175.71.9 port 54648 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:27:51.257361 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:27:51.261312 systemd-logind[1716]: New session 12 of user core.
Apr 25 01:27:51.267587 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 25 01:27:51.951688 sshd[6684]: pam_unix(sshd:session): session closed for user core
Apr 25 01:27:51.955239 systemd[1]: sshd@9-10.0.0.7:22-4.175.71.9:54648.service: Deactivated successfully.
Apr 25 01:27:51.957222 systemd[1]: session-12.scope: Deactivated successfully.
Apr 25 01:27:51.958231 systemd-logind[1716]: Session 12 logged out. Waiting for processes to exit.
Apr 25 01:27:51.959137 systemd-logind[1716]: Removed session 12.
Apr 25 01:27:53.584018 systemd[1]: run-containerd-runc-k8s.io-e794307e845fee62583decf8d386d9699f6ad57350fbad63d488bdb096494113-runc.dOHm27.mount: Deactivated successfully.
Apr 25 01:27:57.109714 systemd[1]: Started sshd@10-10.0.0.7:22-4.175.71.9:50124.service - OpenSSH per-connection server daemon (4.175.71.9:50124).
Apr 25 01:27:57.990531 sshd[6770]: Accepted publickey for core from 4.175.71.9 port 50124 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:27:57.992480 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:27:57.996589 systemd-logind[1716]: New session 13 of user core.
Apr 25 01:27:58.001572 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 25 01:27:58.681431 sshd[6770]: pam_unix(sshd:session): session closed for user core
Apr 25 01:27:58.687017 systemd[1]: sshd@10-10.0.0.7:22-4.175.71.9:50124.service: Deactivated successfully.
Apr 25 01:27:58.690105 systemd[1]: session-13.scope: Deactivated successfully.
Apr 25 01:27:58.691171 systemd-logind[1716]: Session 13 logged out. Waiting for processes to exit.
Apr 25 01:27:58.692249 systemd-logind[1716]: Removed session 13.
Apr 25 01:27:58.832717 systemd[1]: Started sshd@11-10.0.0.7:22-4.175.71.9:50132.service - OpenSSH per-connection server daemon (4.175.71.9:50132).
Apr 25 01:27:59.686358 sshd[6784]: Accepted publickey for core from 4.175.71.9 port 50132 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:27:59.687793 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:27:59.691751 systemd-logind[1716]: New session 14 of user core.
Apr 25 01:27:59.696585 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 25 01:28:00.405601 sshd[6784]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:00.410010 systemd[1]: sshd@11-10.0.0.7:22-4.175.71.9:50132.service: Deactivated successfully.
Apr 25 01:28:00.412369 systemd[1]: session-14.scope: Deactivated successfully.
Apr 25 01:28:00.413095 systemd-logind[1716]: Session 14 logged out. Waiting for processes to exit.
Apr 25 01:28:00.414164 systemd-logind[1716]: Removed session 14.
Apr 25 01:28:00.569747 systemd[1]: Started sshd@12-10.0.0.7:22-4.175.71.9:50138.service - OpenSSH per-connection server daemon (4.175.71.9:50138).
Apr 25 01:28:01.455837 sshd[6795]: Accepted publickey for core from 4.175.71.9 port 50138 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:01.457320 sshd[6795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:01.462318 systemd-logind[1716]: New session 15 of user core.
Apr 25 01:28:01.466612 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 25 01:28:02.147147 sshd[6795]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:02.150146 systemd-logind[1716]: Session 15 logged out. Waiting for processes to exit.
Apr 25 01:28:02.150467 systemd[1]: sshd@12-10.0.0.7:22-4.175.71.9:50138.service: Deactivated successfully.
Apr 25 01:28:02.152556 systemd[1]: session-15.scope: Deactivated successfully.
Apr 25 01:28:02.154805 systemd-logind[1716]: Removed session 15.
Apr 25 01:28:07.306703 systemd[1]: Started sshd@13-10.0.0.7:22-4.175.71.9:53828.service - OpenSSH per-connection server daemon (4.175.71.9:53828).
Apr 25 01:28:08.196465 sshd[6856]: Accepted publickey for core from 4.175.71.9 port 53828 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:08.197656 sshd[6856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:08.201610 systemd-logind[1716]: New session 16 of user core.
Apr 25 01:28:08.207573 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 25 01:28:08.889934 sshd[6856]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:08.893695 systemd-logind[1716]: Session 16 logged out. Waiting for processes to exit.
Apr 25 01:28:08.894407 systemd[1]: sshd@13-10.0.0.7:22-4.175.71.9:53828.service: Deactivated successfully.
Apr 25 01:28:08.897895 systemd[1]: session-16.scope: Deactivated successfully.
Apr 25 01:28:08.899211 systemd-logind[1716]: Removed session 16.
Apr 25 01:28:09.051697 systemd[1]: Started sshd@14-10.0.0.7:22-4.175.71.9:53844.service - OpenSSH per-connection server daemon (4.175.71.9:53844).
Apr 25 01:28:09.969381 sshd[6881]: Accepted publickey for core from 4.175.71.9 port 53844 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:09.970853 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:09.974459 systemd-logind[1716]: New session 17 of user core.
Apr 25 01:28:09.983574 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 25 01:28:10.864729 sshd[6881]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:10.868758 systemd[1]: sshd@14-10.0.0.7:22-4.175.71.9:53844.service: Deactivated successfully.
Apr 25 01:28:10.872248 systemd[1]: session-17.scope: Deactivated successfully.
Apr 25 01:28:10.873342 systemd-logind[1716]: Session 17 logged out. Waiting for processes to exit.
Apr 25 01:28:10.875069 systemd-logind[1716]: Removed session 17.
Apr 25 01:28:11.029708 systemd[1]: Started sshd@15-10.0.0.7:22-4.175.71.9:53856.service - OpenSSH per-connection server daemon (4.175.71.9:53856).
Apr 25 01:28:11.948329 sshd[6892]: Accepted publickey for core from 4.175.71.9 port 53856 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:11.949416 sshd[6892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:11.953623 systemd-logind[1716]: New session 18 of user core.
Apr 25 01:28:11.958583 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 25 01:28:13.265212 sshd[6892]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:13.268043 systemd-logind[1716]: Session 18 logged out. Waiting for processes to exit.
Apr 25 01:28:13.268750 systemd[1]: sshd@15-10.0.0.7:22-4.175.71.9:53856.service: Deactivated successfully.
Apr 25 01:28:13.271423 systemd[1]: session-18.scope: Deactivated successfully.
Apr 25 01:28:13.273043 systemd-logind[1716]: Removed session 18.
Apr 25 01:28:13.423719 systemd[1]: Started sshd@16-10.0.0.7:22-4.175.71.9:53872.service - OpenSSH per-connection server daemon (4.175.71.9:53872).
Apr 25 01:28:14.327463 sshd[6942]: Accepted publickey for core from 4.175.71.9 port 53872 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:14.328636 sshd[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:14.332471 systemd-logind[1716]: New session 19 of user core.
Apr 25 01:28:14.339605 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 25 01:28:15.132222 sshd[6942]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:15.136744 systemd-logind[1716]: Session 19 logged out. Waiting for processes to exit.
Apr 25 01:28:15.136983 systemd[1]: sshd@16-10.0.0.7:22-4.175.71.9:53872.service: Deactivated successfully.
Apr 25 01:28:15.138777 systemd[1]: session-19.scope: Deactivated successfully.
Apr 25 01:28:15.139749 systemd-logind[1716]: Removed session 19.
Apr 25 01:28:15.300070 systemd[1]: Started sshd@17-10.0.0.7:22-4.175.71.9:53886.service - OpenSSH per-connection server daemon (4.175.71.9:53886).
Apr 25 01:28:16.217846 sshd[6954]: Accepted publickey for core from 4.175.71.9 port 53886 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:16.219819 sshd[6954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:16.225006 systemd-logind[1716]: New session 20 of user core.
Apr 25 01:28:16.227605 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 25 01:28:16.916869 sshd[6954]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:16.920429 systemd[1]: sshd@17-10.0.0.7:22-4.175.71.9:53886.service: Deactivated successfully.
Apr 25 01:28:16.922858 systemd[1]: session-20.scope: Deactivated successfully.
Apr 25 01:28:16.923703 systemd-logind[1716]: Session 20 logged out. Waiting for processes to exit.
Apr 25 01:28:16.925010 systemd-logind[1716]: Removed session 20.
Apr 25 01:28:22.079665 systemd[1]: Started sshd@18-10.0.0.7:22-4.175.71.9:33436.service - OpenSSH per-connection server daemon (4.175.71.9:33436).
Apr 25 01:28:22.989553 sshd[6991]: Accepted publickey for core from 4.175.71.9 port 33436 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:22.990956 sshd[6991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:22.994878 systemd-logind[1716]: New session 21 of user core.
Apr 25 01:28:23.002584 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 25 01:28:23.681740 sshd[6991]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:23.685949 systemd[1]: sshd@18-10.0.0.7:22-4.175.71.9:33436.service: Deactivated successfully.
Apr 25 01:28:23.689222 systemd[1]: session-21.scope: Deactivated successfully.
Apr 25 01:28:23.690702 systemd-logind[1716]: Session 21 logged out. Waiting for processes to exit.
Apr 25 01:28:23.692083 systemd-logind[1716]: Removed session 21.
Apr 25 01:28:28.842983 systemd[1]: Started sshd@19-10.0.0.7:22-4.175.71.9:45204.service - OpenSSH per-connection server daemon (4.175.71.9:45204).
Apr 25 01:28:29.722010 sshd[7023]: Accepted publickey for core from 4.175.71.9 port 45204 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:29.723930 sshd[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:29.729157 systemd-logind[1716]: New session 22 of user core.
Apr 25 01:28:29.735935 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 25 01:28:30.397682 sshd[7023]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:30.401653 systemd-logind[1716]: Session 22 logged out. Waiting for processes to exit.
Apr 25 01:28:30.402363 systemd[1]: sshd@19-10.0.0.7:22-4.175.71.9:45204.service: Deactivated successfully.
Apr 25 01:28:30.404303 systemd[1]: session-22.scope: Deactivated successfully.
Apr 25 01:28:30.405208 systemd-logind[1716]: Removed session 22.
Apr 25 01:28:35.568687 systemd[1]: Started sshd@20-10.0.0.7:22-4.175.71.9:55868.service - OpenSSH per-connection server daemon (4.175.71.9:55868).
Apr 25 01:28:36.476466 sshd[7057]: Accepted publickey for core from 4.175.71.9 port 55868 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:36.477635 sshd[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:36.482396 systemd-logind[1716]: New session 23 of user core.
Apr 25 01:28:36.489583 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 25 01:28:37.166681 sshd[7057]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:37.170834 systemd[1]: sshd@20-10.0.0.7:22-4.175.71.9:55868.service: Deactivated successfully.
Apr 25 01:28:37.172631 systemd[1]: session-23.scope: Deactivated successfully.
Apr 25 01:28:37.174108 systemd-logind[1716]: Session 23 logged out. Waiting for processes to exit.
Apr 25 01:28:37.175330 systemd-logind[1716]: Removed session 23.
Apr 25 01:28:42.327316 systemd[1]: Started sshd@21-10.0.0.7:22-4.175.71.9:55878.service - OpenSSH per-connection server daemon (4.175.71.9:55878).
Apr 25 01:28:43.213933 sshd[7073]: Accepted publickey for core from 4.175.71.9 port 55878 ssh2: RSA SHA256:ib9tIjFRIcqBvUoilyM275yd3PFHCb9uOUCs+nzTMeQ
Apr 25 01:28:43.214829 sshd[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 01:28:43.219199 systemd-logind[1716]: New session 24 of user core.
Apr 25 01:28:43.223592 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 25 01:28:43.902289 sshd[7073]: pam_unix(sshd:session): session closed for user core
Apr 25 01:28:43.909033 systemd-logind[1716]: Session 24 logged out. Waiting for processes to exit.
Apr 25 01:28:43.909607 systemd[1]: sshd@21-10.0.0.7:22-4.175.71.9:55878.service: Deactivated successfully.
Apr 25 01:28:43.912077 systemd[1]: session-24.scope: Deactivated successfully.
Apr 25 01:28:43.913246 systemd-logind[1716]: Removed session 24.