Mar 11 01:22:40.188780 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 11 01:22:40.188802 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 10 23:05:53 -00 2026
Mar 11 01:22:40.188811 kernel: KASLR enabled
Mar 11 01:22:40.188817 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 11 01:22:40.188824 kernel: printk: bootconsole [pl11] enabled
Mar 11 01:22:40.188830 kernel: efi: EFI v2.7 by EDK II
Mar 11 01:22:40.188838 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 11 01:22:40.188844 kernel: random: crng init done
Mar 11 01:22:40.188851 kernel: ACPI: Early table checksum verification disabled
Mar 11 01:22:40.188857 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 11 01:22:40.188864 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188870 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188878 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 11 01:22:40.188885 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188892 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188899 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188906 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188915 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188921 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188928 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 11 01:22:40.188935 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188942 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 11 01:22:40.188949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 11 01:22:40.188955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 11 01:22:40.188962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 11 01:22:40.188969 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 11 01:22:40.188976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 11 01:22:40.188982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 11 01:22:40.188990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 11 01:22:40.188997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 11 01:22:40.189004 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 11 01:22:40.189011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 11 01:22:40.189017 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 11 01:22:40.189024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 11 01:22:40.189031 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff]
Mar 11 01:22:40.189037 kernel: Zone ranges:
Mar 11 01:22:40.189044 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 11 01:22:40.189051 kernel: DMA32 empty
Mar 11 01:22:40.189058 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 11 01:22:40.189064 kernel: Movable zone start for each node
Mar 11 01:22:40.189075 kernel: Early memory node ranges
Mar 11 01:22:40.189083 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 11 01:22:40.189090 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 11 01:22:40.189097 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 11 01:22:40.189104 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 11 01:22:40.189113 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 11 01:22:40.189120 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 11 01:22:40.189127 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 11 01:22:40.189134 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 11 01:22:40.189141 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 11 01:22:40.189148 kernel: psci: probing for conduit method from ACPI.
Mar 11 01:22:40.189155 kernel: psci: PSCIv1.1 detected in firmware.
Mar 11 01:22:40.189163 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 11 01:22:40.189170 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 11 01:22:40.191203 kernel: psci: SMC Calling Convention v1.4
Mar 11 01:22:40.191212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 11 01:22:40.191220 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 11 01:22:40.191233 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 11 01:22:40.191240 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 11 01:22:40.191248 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 11 01:22:40.191255 kernel: Detected PIPT I-cache on CPU0
Mar 11 01:22:40.191262 kernel: CPU features: detected: GIC system register CPU interface
Mar 11 01:22:40.191269 kernel: CPU features: detected: Hardware dirty bit management
Mar 11 01:22:40.191277 kernel: CPU features: detected: Spectre-BHB
Mar 11 01:22:40.191284 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 11 01:22:40.191291 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 11 01:22:40.191298 kernel: CPU features: detected: ARM erratum 1418040
Mar 11 01:22:40.191306 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 11 01:22:40.191314 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 11 01:22:40.191322 kernel: alternatives: applying boot alternatives
Mar 11 01:22:40.191331 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c
Mar 11 01:22:40.191338 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 01:22:40.191346 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 01:22:40.191353 kernel: Fallback order for Node 0: 0
Mar 11 01:22:40.191360 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 11 01:22:40.191367 kernel: Policy zone: Normal
Mar 11 01:22:40.191374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 01:22:40.191381 kernel: software IO TLB: area num 2.
Mar 11 01:22:40.191389 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 11 01:22:40.191398 kernel: Memory: 3982644K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211516K reserved, 0K cma-reserved)
Mar 11 01:22:40.191405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 11 01:22:40.191412 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 01:22:40.191420 kernel: rcu: RCU event tracing is enabled.
Mar 11 01:22:40.191427 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 11 01:22:40.191434 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 01:22:40.191442 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 01:22:40.191449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 01:22:40.191456 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 11 01:22:40.191463 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 11 01:22:40.191470 kernel: GICv3: 960 SPIs implemented
Mar 11 01:22:40.191479 kernel: GICv3: 0 Extended SPIs implemented
Mar 11 01:22:40.191486 kernel: Root IRQ handler: gic_handle_irq
Mar 11 01:22:40.191493 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 11 01:22:40.191501 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 11 01:22:40.191508 kernel: ITS: No ITS available, not enabling LPIs
Mar 11 01:22:40.191515 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 01:22:40.191523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 11 01:22:40.191530 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 11 01:22:40.191537 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 11 01:22:40.191544 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 11 01:22:40.191552 kernel: Console: colour dummy device 80x25
Mar 11 01:22:40.191561 kernel: printk: console [tty1] enabled
Mar 11 01:22:40.191568 kernel: ACPI: Core revision 20230628
Mar 11 01:22:40.191576 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 11 01:22:40.191583 kernel: pid_max: default: 32768 minimum: 301
Mar 11 01:22:40.191591 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 01:22:40.191598 kernel: landlock: Up and running.
Mar 11 01:22:40.191606 kernel: SELinux: Initializing.
Mar 11 01:22:40.191613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.191621 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.191630 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 11 01:22:40.191637 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 11 01:22:40.191645 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 11 01:22:40.191652 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 11 01:22:40.191660 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 11 01:22:40.191667 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 01:22:40.191675 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 01:22:40.191682 kernel: Remapping and enabling EFI services.
Mar 11 01:22:40.191695 kernel: smp: Bringing up secondary CPUs ...
Mar 11 01:22:40.191703 kernel: Detected PIPT I-cache on CPU1
Mar 11 01:22:40.191711 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 11 01:22:40.191719 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 11 01:22:40.191728 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 11 01:22:40.191736 kernel: smp: Brought up 1 node, 2 CPUs
Mar 11 01:22:40.191744 kernel: SMP: Total of 2 processors activated.
Mar 11 01:22:40.191752 kernel: CPU features: detected: 32-bit EL0 Support
Mar 11 01:22:40.191760 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 11 01:22:40.191769 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 11 01:22:40.191777 kernel: CPU features: detected: CRC32 instructions
Mar 11 01:22:40.191785 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 11 01:22:40.191793 kernel: CPU features: detected: LSE atomic instructions
Mar 11 01:22:40.191801 kernel: CPU features: detected: Privileged Access Never
Mar 11 01:22:40.191809 kernel: CPU: All CPU(s) started at EL1
Mar 11 01:22:40.191816 kernel: alternatives: applying system-wide alternatives
Mar 11 01:22:40.191824 kernel: devtmpfs: initialized
Mar 11 01:22:40.191832 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 01:22:40.191841 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 11 01:22:40.191849 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 01:22:40.191857 kernel: SMBIOS 3.1.0 present.
Mar 11 01:22:40.191865 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 11 01:22:40.191873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 01:22:40.191880 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 11 01:22:40.191888 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 11 01:22:40.191896 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 11 01:22:40.191904 kernel: audit: initializing netlink subsys (disabled)
Mar 11 01:22:40.191913 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 11 01:22:40.191921 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 01:22:40.191929 kernel: cpuidle: using governor menu
Mar 11 01:22:40.191936 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 11 01:22:40.191944 kernel: ASID allocator initialised with 32768 entries
Mar 11 01:22:40.191952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 01:22:40.191960 kernel: Serial: AMBA PL011 UART driver
Mar 11 01:22:40.191968 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 11 01:22:40.191975 kernel: Modules: 0 pages in range for non-PLT usage
Mar 11 01:22:40.191985 kernel: Modules: 509008 pages in range for PLT usage
Mar 11 01:22:40.191992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192000 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 01:22:40.192008 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 11 01:22:40.192024 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192032 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 01:22:40.192039 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192047 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 11 01:22:40.192056 kernel: ACPI: Added _OSI(Module Device)
Mar 11 01:22:40.192064 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 01:22:40.192072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 01:22:40.192079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 01:22:40.192087 kernel: ACPI: Interpreter enabled
Mar 11 01:22:40.192095 kernel: ACPI: Using GIC for interrupt routing
Mar 11 01:22:40.192103 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 11 01:22:40.192110 kernel: printk: console [ttyAMA0] enabled
Mar 11 01:22:40.192118 kernel: printk: bootconsole [pl11] disabled
Mar 11 01:22:40.192127 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 11 01:22:40.192135 kernel: iommu: Default domain type: Translated
Mar 11 01:22:40.192143 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 11 01:22:40.192150 kernel: efivars: Registered efivars operations
Mar 11 01:22:40.192158 kernel: vgaarb: loaded
Mar 11 01:22:40.192166 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 11 01:22:40.192180 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 01:22:40.192188 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 01:22:40.192196 kernel: pnp: PnP ACPI init
Mar 11 01:22:40.192206 kernel: pnp: PnP ACPI: found 0 devices
Mar 11 01:22:40.192213 kernel: NET: Registered PF_INET protocol family
Mar 11 01:22:40.192221 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 01:22:40.192229 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 01:22:40.192237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 01:22:40.192245 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 01:22:40.192253 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 01:22:40.192261 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 01:22:40.192269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.192278 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.192286 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 01:22:40.192294 kernel: PCI: CLS 0 bytes, default 64
Mar 11 01:22:40.192301 kernel: kvm [1]: HYP mode not available
Mar 11 01:22:40.192309 kernel: Initialise system trusted keyrings
Mar 11 01:22:40.192317 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 01:22:40.192325 kernel: Key type asymmetric registered
Mar 11 01:22:40.192332 kernel: Asymmetric key parser 'x509' registered
Mar 11 01:22:40.192340 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 11 01:22:40.192350 kernel: io scheduler mq-deadline registered
Mar 11 01:22:40.192358 kernel: io scheduler kyber registered
Mar 11 01:22:40.192365 kernel: io scheduler bfq registered
Mar 11 01:22:40.192373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 01:22:40.192381 kernel: thunder_xcv, ver 1.0
Mar 11 01:22:40.192388 kernel: thunder_bgx, ver 1.0
Mar 11 01:22:40.192396 kernel: nicpf, ver 1.0
Mar 11 01:22:40.192403 kernel: nicvf, ver 1.0
Mar 11 01:22:40.192538 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 11 01:22:40.192617 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-11T01:22:39 UTC (1773192159)
Mar 11 01:22:40.192628 kernel: efifb: probing for efifb
Mar 11 01:22:40.192636 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 11 01:22:40.192643 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 11 01:22:40.192651 kernel: efifb: scrolling: redraw
Mar 11 01:22:40.192659 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 11 01:22:40.192667 kernel: Console: switching to colour frame buffer device 128x48
Mar 11 01:22:40.192675 kernel: fb0: EFI VGA frame buffer device
Mar 11 01:22:40.192685 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 11 01:22:40.192693 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 11 01:22:40.192701 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 11 01:22:40.192709 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 11 01:22:40.192716 kernel: watchdog: Hard watchdog permanently disabled
Mar 11 01:22:40.192724 kernel: NET: Registered PF_INET6 protocol family
Mar 11 01:22:40.192732 kernel: Segment Routing with IPv6
Mar 11 01:22:40.192740 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 01:22:40.192748 kernel: NET: Registered PF_PACKET protocol family
Mar 11 01:22:40.192757 kernel: Key type dns_resolver registered
Mar 11 01:22:40.192765 kernel: registered taskstats version 1
Mar 11 01:22:40.192773 kernel: Loading compiled-in X.509 certificates
Mar 11 01:22:40.192781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e2d32b7c633536fa6eb6e76ba97909ae7ad11d09'
Mar 11 01:22:40.192788 kernel: Key type .fscrypt registered
Mar 11 01:22:40.192796 kernel: Key type fscrypt-provisioning registered
Mar 11 01:22:40.192804 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 01:22:40.192812 kernel: ima: Allocated hash algorithm: sha1
Mar 11 01:22:40.192819 kernel: ima: No architecture policies found
Mar 11 01:22:40.192828 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 11 01:22:40.192836 kernel: clk: Disabling unused clocks
Mar 11 01:22:40.192844 kernel: Freeing unused kernel memory: 39424K
Mar 11 01:22:40.192852 kernel: Run /init as init process
Mar 11 01:22:40.192860 kernel: with arguments:
Mar 11 01:22:40.192867 kernel: /init
Mar 11 01:22:40.192875 kernel: with environment:
Mar 11 01:22:40.192882 kernel: HOME=/
Mar 11 01:22:40.192890 kernel: TERM=linux
Mar 11 01:22:40.192900 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 01:22:40.192911 systemd[1]: Detected virtualization microsoft.
Mar 11 01:22:40.192920 systemd[1]: Detected architecture arm64.
Mar 11 01:22:40.192928 systemd[1]: Running in initrd.
Mar 11 01:22:40.192936 systemd[1]: No hostname configured, using default hostname.
Mar 11 01:22:40.192944 systemd[1]: Hostname set to .
Mar 11 01:22:40.192952 systemd[1]: Initializing machine ID from random generator.
Mar 11 01:22:40.192962 systemd[1]: Queued start job for default target initrd.target.
Mar 11 01:22:40.192971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:22:40.192979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:22:40.192988 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 01:22:40.192997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 01:22:40.193005 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 01:22:40.193014 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 01:22:40.193024 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 01:22:40.193034 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 01:22:40.193043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:22:40.193051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:22:40.193060 systemd[1]: Reached target paths.target - Path Units.
Mar 11 01:22:40.193073 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 01:22:40.193083 systemd[1]: Reached target swap.target - Swaps.
Mar 11 01:22:40.193093 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 01:22:40.193101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:22:40.193112 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 01:22:40.193122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 01:22:40.193130 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 01:22:40.193139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:22:40.193147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:22:40.193156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:22:40.193164 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 01:22:40.193807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 01:22:40.193826 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 01:22:40.193835 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 01:22:40.193843 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 01:22:40.193852 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 01:22:40.193860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 01:22:40.193892 systemd-journald[218]: Collecting audit messages is disabled.
Mar 11 01:22:40.193914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:22:40.193923 systemd-journald[218]: Journal started
Mar 11 01:22:40.193943 systemd-journald[218]: Runtime Journal (/run/log/journal/e3aa01b02a964a6b86ab3266e97b3a60) is 8.0M, max 78.5M, 70.5M free.
Mar 11 01:22:40.194262 systemd-modules-load[219]: Inserted module 'overlay'
Mar 11 01:22:40.213860 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 01:22:40.222632 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 11 01:22:40.229708 systemd-modules-load[219]: Inserted module 'br_netfilter'
Mar 11 01:22:40.234799 kernel: Bridge firewalling registered
Mar 11 01:22:40.230328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 01:22:40.241197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:22:40.249268 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 01:22:40.257619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:22:40.265556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:40.282373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:22:40.288301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 01:22:40.298331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 01:22:40.329283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 01:22:40.336192 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:40.345499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:22:40.350321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:22:40.362762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:22:40.386399 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 11 01:22:40.397316 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 01:22:40.411979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 01:22:40.419148 dracut-cmdline[253]: dracut-dracut-053
Mar 11 01:22:40.431204 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:22:40.447149 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c
Mar 11 01:22:40.475360 systemd-resolved[256]: Positive Trust Anchors:
Mar 11 01:22:40.475376 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 01:22:40.475407 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 01:22:40.478013 systemd-resolved[256]: Defaulting to hostname 'linux'.
Mar 11 01:22:40.479355 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 01:22:40.485443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:22:40.587195 kernel: SCSI subsystem initialized
Mar 11 01:22:40.593182 kernel: Loading iSCSI transport class v2.0-870.
Mar 11 01:22:40.603191 kernel: iscsi: registered transport (tcp)
Mar 11 01:22:40.619373 kernel: iscsi: registered transport (qla4xxx)
Mar 11 01:22:40.619390 kernel: QLogic iSCSI HBA Driver
Mar 11 01:22:40.651680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 11 01:22:40.665353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 11 01:22:40.693003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 11 01:22:40.693053 kernel: device-mapper: uevent: version 1.0.3
Mar 11 01:22:40.697914 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 11 01:22:40.748190 kernel: raid6: neonx8 gen() 15812 MB/s
Mar 11 01:22:40.763187 kernel: raid6: neonx4 gen() 15687 MB/s
Mar 11 01:22:40.782182 kernel: raid6: neonx2 gen() 13245 MB/s
Mar 11 01:22:40.802185 kernel: raid6: neonx1 gen() 10513 MB/s
Mar 11 01:22:40.821179 kernel: raid6: int64x8 gen() 6978 MB/s
Mar 11 01:22:40.840179 kernel: raid6: int64x4 gen() 7372 MB/s
Mar 11 01:22:40.860180 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 11 01:22:40.882319 kernel: raid6: int64x1 gen() 5068 MB/s
Mar 11 01:22:40.882331 kernel: raid6: using algorithm neonx8 gen() 15812 MB/s
Mar 11 01:22:40.904968 kernel: raid6: .... xor() 12056 MB/s, rmw enabled
Mar 11 01:22:40.904980 kernel: raid6: using neon recovery algorithm
Mar 11 01:22:40.913182 kernel: xor: measuring software checksum speed
Mar 11 01:22:40.918691 kernel: 8regs : 18881 MB/sec
Mar 11 01:22:40.918704 kernel: 32regs : 19664 MB/sec
Mar 11 01:22:40.921632 kernel: arm64_neon : 27016 MB/sec
Mar 11 01:22:40.925181 kernel: xor: using function: arm64_neon (27016 MB/sec)
Mar 11 01:22:40.975534 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 11 01:22:40.984219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 01:22:40.999295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:22:41.019751 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 11 01:22:41.024143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:22:41.047302 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 11 01:22:41.064376 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 11 01:22:41.093802 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 01:22:41.106385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 01:22:41.146208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:22:41.163812 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 11 01:22:41.189982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 11 01:22:41.204935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 01:22:41.216935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:22:41.228614 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 01:22:41.243248 kernel: hv_vmbus: Vmbus version:5.3
Mar 11 01:22:41.244412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 11 01:22:41.267667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 01:22:41.297209 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 11 01:22:41.297232 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 11 01:22:41.297251 kernel: hv_vmbus: registering driver hid_hyperv
Mar 11 01:22:41.297261 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 11 01:22:41.297271 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 11 01:22:41.297281 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 11 01:22:41.297421 kernel: hv_vmbus: registering driver hv_netvsc
Mar 11 01:22:41.297433 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 11 01:22:41.311805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 01:22:41.343008 kernel: hv_vmbus: registering driver hv_storvsc
Mar 11 01:22:41.343029 kernel: scsi host0: storvsc_host_t
Mar 11 01:22:41.343194 kernel: scsi host1: storvsc_host_t
Mar 11 01:22:41.343217 kernel: PTP clock support registered
Mar 11 01:22:41.312262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:41.356528 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 11 01:22:41.322400 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:22:41.373580 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 11 01:22:41.336763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:22:41.336992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:41.346703 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:22:41.379550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:22:41.412108 kernel: hv_utils: Registering HyperV Utility Driver
Mar 11 01:22:41.412156 kernel: hv_vmbus: registering driver hv_utils
Mar 11 01:22:41.413189 kernel: hv_utils: Heartbeat IC version 3.0
Mar 11 01:22:41.418288 kernel: hv_utils: Shutdown IC version 3.2
Mar 11 01:22:41.418326 kernel: hv_utils: TimeSync IC version 4.0
Mar 11 01:22:41.421185 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: VF slot 1 added
Mar 11 01:22:41.292182 systemd-resolved[256]: Clock change detected. Flushing caches.
Mar 11 01:22:41.308390 systemd-journald[218]: Time jumped backwards, rotating.
Mar 11 01:22:41.295196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:41.322487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:22:41.345877 kernel: hv_vmbus: registering driver hv_pci
Mar 11 01:22:41.345897 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 11 01:22:41.346081 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 11 01:22:41.346092 kernel: hv_pci fbf108b5-9bdc-45ed-b09b-17feca332aa2: PCI VMBus probing: Using version 0x10004
Mar 11 01:22:41.347473 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 11 01:22:41.357473 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 11 01:22:41.363673 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 11 01:22:41.374364 kernel: hv_pci fbf108b5-9bdc-45ed-b09b-17feca332aa2: PCI host bridge to bus 9bdc:00
Mar 11 01:22:41.374534 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 11 01:22:41.380155 kernel: pci_bus 9bdc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 11 01:22:41.380323 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 11 01:22:41.377481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:41.410175 kernel: pci_bus 9bdc:00: No busn resource found for root bus, will use [bus 00-ff] Mar 11 01:22:41.410301 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 11 01:22:41.410402 kernel: pci 9bdc:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 11 01:22:41.410425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 11 01:22:41.420250 kernel: pci 9bdc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 11 01:22:41.420294 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:22:41.420304 kernel: pci 9bdc:00:02.0: enabling Extended Tags Mar 11 01:22:41.429476 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 11 01:22:41.452074 kernel: pci 9bdc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9bdc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 11 01:22:41.452252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 11 01:22:41.452343 kernel: pci_bus 9bdc:00: busn_res: [bus 00-ff] end is updated to 00 Mar 11 01:22:41.460652 kernel: pci 9bdc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 11 01:22:41.501693 kernel: mlx5_core 9bdc:00:02.0: enabling device (0000 -> 0002) Mar 11 01:22:41.508100 kernel: mlx5_core 9bdc:00:02.0: firmware version: 16.30.5026 Mar 11 01:22:41.583286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 11 01:22:41.604617 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484) Mar 11 01:22:41.606582 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 11 01:22:41.632001 kernel: BTRFS: device fsid 6268782d-ce1a-4049-a9c9-846620fa6ee9 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (493) Mar 11 01:22:41.640431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 11 01:22:41.655379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 11 01:22:41.660711 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 11 01:22:41.684140 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 11 01:22:41.707472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:22:41.755536 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: VF registering: eth1 Mar 11 01:22:41.755715 kernel: mlx5_core 9bdc:00:02.0 eth1: joined to eth0 Mar 11 01:22:41.764670 kernel: mlx5_core 9bdc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 11 01:22:41.783498 kernel: mlx5_core 9bdc:00:02.0 enP39900s1: renamed from eth1 Mar 11 01:22:42.723339 disk-uuid[603]: The operation has completed successfully. Mar 11 01:22:42.727631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 11 01:22:42.783067 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 11 01:22:42.785613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 11 01:22:42.818570 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 11 01:22:42.828811 sh[720]: Success Mar 11 01:22:42.848636 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 11 01:22:42.925801 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 11 01:22:42.933555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 11 01:22:42.947839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 11 01:22:42.977099 kernel: BTRFS info (device dm-0): first mount of filesystem 6268782d-ce1a-4049-a9c9-846620fa6ee9 Mar 11 01:22:42.977149 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:22:42.982508 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 11 01:22:42.986450 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 11 01:22:42.990070 kernel: BTRFS info (device dm-0): using free space tree Mar 11 01:22:43.054842 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 11 01:22:43.059106 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 11 01:22:43.079590 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 11 01:22:43.086588 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 11 01:22:43.123619 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:22:43.123670 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:22:43.127770 kernel: BTRFS info (device sda6): using free space tree Mar 11 01:22:43.142470 kernel: BTRFS info (device sda6): auto enabling async discard Mar 11 01:22:43.151266 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 11 01:22:43.161474 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:22:43.168603 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 11 01:22:43.183645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 11 01:22:43.216885 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 11 01:22:43.229618 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 11 01:22:43.272691 systemd-networkd[904]: lo: Link UP Mar 11 01:22:43.272698 systemd-networkd[904]: lo: Gained carrier Mar 11 01:22:43.277086 systemd-networkd[904]: Enumeration completed Mar 11 01:22:43.277241 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 11 01:22:43.282443 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:22:43.282446 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 01:22:43.283047 systemd[1]: Reached target network.target - Network. Mar 11 01:22:43.362496 kernel: mlx5_core 9bdc:00:02.0 enP39900s1: Link up Mar 11 01:22:43.387150 ignition[871]: Ignition 2.19.0 Mar 11 01:22:43.387164 ignition[871]: Stage: fetch-offline Mar 11 01:22:43.391117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 01:22:43.387196 ignition[871]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:43.414761 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: Data path switched to VF: enP39900s1 Mar 11 01:22:43.414310 systemd-networkd[904]: enP39900s1: Link UP Mar 11 01:22:43.387204 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:43.414411 systemd-networkd[904]: eth0: Link UP Mar 11 01:22:43.387403 ignition[871]: parsed url from cmdline: "" Mar 11 01:22:43.414560 systemd-networkd[904]: eth0: Gained carrier Mar 11 01:22:43.387406 ignition[871]: no config URL provided Mar 11 01:22:43.414570 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:22:43.387411 ignition[871]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 01:22:43.422592 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 11 01:22:43.387420 ignition[871]: no config at "/usr/lib/ignition/user.ign" Mar 11 01:22:43.426643 systemd-networkd[904]: enP39900s1: Gained carrier Mar 11 01:22:43.387428 ignition[871]: failed to fetch config: resource requires networking Mar 11 01:22:43.449490 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 11 01:22:43.387637 ignition[871]: Ignition finished successfully Mar 11 01:22:43.439268 ignition[912]: Ignition 2.19.0 Mar 11 01:22:43.439278 ignition[912]: Stage: fetch Mar 11 01:22:43.440427 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:43.440448 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:43.440734 ignition[912]: parsed url from cmdline: "" Mar 11 01:22:43.440738 ignition[912]: no config URL provided Mar 11 01:22:43.440743 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Mar 11 01:22:43.440753 ignition[912]: no config at "/usr/lib/ignition/user.ign" Mar 11 01:22:43.440774 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 11 01:22:43.440932 ignition[912]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 11 01:22:43.641905 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2 Mar 11 01:22:43.728505 ignition[912]: GET result: OK Mar 11 01:22:43.728568 ignition[912]: config has been read from IMDS userdata Mar 11 01:22:43.728611 ignition[912]: parsing config with SHA512: 8ace4f8e07dc58a5bc285f68cfc4972ab7408ef4cf2b60d4c0e087a7c1342b5463231629be5d4aa63d851662ae27827134c7813baa339eeeab491b0ce64be6e7 Mar 11 01:22:43.732667 unknown[912]: fetched base config from "system" Mar 11 01:22:43.733011 ignition[912]: fetch: fetch complete Mar 11 
01:22:43.732673 unknown[912]: fetched base config from "system" Mar 11 01:22:43.733015 ignition[912]: fetch: fetch passed Mar 11 01:22:43.732681 unknown[912]: fetched user config from "azure" Mar 11 01:22:43.733056 ignition[912]: Ignition finished successfully Mar 11 01:22:43.736631 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 11 01:22:43.756664 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 11 01:22:43.776650 ignition[920]: Ignition 2.19.0 Mar 11 01:22:43.776658 ignition[920]: Stage: kargs Mar 11 01:22:43.780715 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 11 01:22:43.776818 ignition[920]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:43.793574 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 11 01:22:43.776827 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:43.808717 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 11 01:22:43.777820 ignition[920]: kargs: kargs passed Mar 11 01:22:43.813520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 11 01:22:43.777866 ignition[920]: Ignition finished successfully Mar 11 01:22:43.823010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 11 01:22:43.805868 ignition[926]: Ignition 2.19.0 Mar 11 01:22:43.832808 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 11 01:22:43.805874 ignition[926]: Stage: disks Mar 11 01:22:43.840011 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 01:22:43.806094 ignition[926]: no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:43.849787 systemd[1]: Reached target basic.target - Basic System. Mar 11 01:22:43.806104 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:43.869614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 11 01:22:43.807348 ignition[926]: disks: disks passed Mar 11 01:22:43.807399 ignition[926]: Ignition finished successfully Mar 11 01:22:43.924532 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 11 01:22:43.931892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 11 01:22:43.946597 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 11 01:22:43.996480 kernel: EXT4-fs (sda9): mounted filesystem 19488164-8e25-4d6a-86d9-f70a8ed432cb r/w with ordered data mode. Quota mode: none. Mar 11 01:22:43.997315 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 11 01:22:44.001446 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 11 01:22:44.022557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 01:22:44.031263 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 11 01:22:44.050471 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945) Mar 11 01:22:44.055483 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:22:44.060885 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:22:44.061342 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 11 01:22:44.075990 kernel: BTRFS info (device sda6): using free space tree Mar 11 01:22:44.070164 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 11 01:22:44.070194 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 01:22:44.082693 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 11 01:22:44.105697 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 11 01:22:44.120465 kernel: BTRFS info (device sda6): auto enabling async discard Mar 11 01:22:44.121229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 11 01:22:44.239637 coreos-metadata[947]: Mar 11 01:22:44.239 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 11 01:22:44.248274 coreos-metadata[947]: Mar 11 01:22:44.248 INFO Fetch successful Mar 11 01:22:44.253170 coreos-metadata[947]: Mar 11 01:22:44.253 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 11 01:22:44.271630 coreos-metadata[947]: Mar 11 01:22:44.271 INFO Fetch successful Mar 11 01:22:44.276231 coreos-metadata[947]: Mar 11 01:22:44.275 INFO wrote hostname ci-4081.3.6-n-49f1e4db19 to /sysroot/etc/hostname Mar 11 01:22:44.283450 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 11 01:22:44.320826 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory Mar 11 01:22:44.331958 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory Mar 11 01:22:44.344794 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory Mar 11 01:22:44.353540 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory Mar 11 01:22:44.592423 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 11 01:22:44.604570 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 11 01:22:44.610297 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 11 01:22:44.629590 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:22:44.628926 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 11 01:22:44.650921 ignition[1065]: INFO : Ignition 2.19.0 Mar 11 01:22:44.654303 ignition[1065]: INFO : Stage: mount Mar 11 01:22:44.654303 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:44.654303 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:44.651506 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 11 01:22:44.679692 ignition[1065]: INFO : mount: mount passed Mar 11 01:22:44.679692 ignition[1065]: INFO : Ignition finished successfully Mar 11 01:22:44.666972 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 11 01:22:44.686630 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 11 01:22:45.005644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 11 01:22:45.031374 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076) Mar 11 01:22:45.031411 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046 Mar 11 01:22:45.036079 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 11 01:22:45.039424 kernel: BTRFS info (device sda6): using free space tree Mar 11 01:22:45.048467 kernel: BTRFS info (device sda6): auto enabling async discard Mar 11 01:22:45.048771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 11 01:22:45.077417 ignition[1094]: INFO : Ignition 2.19.0 Mar 11 01:22:45.077417 ignition[1094]: INFO : Stage: files Mar 11 01:22:45.084007 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:45.084007 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:45.084007 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Mar 11 01:22:45.084007 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 11 01:22:45.084007 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 11 01:22:45.110588 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 11 01:22:45.110588 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 11 01:22:45.098467 systemd-networkd[904]: eth0: Gained IPv6LL Mar 11 01:22:45.125123 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 11 01:22:45.125123 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 11 01:22:45.125123 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 11 01:22:45.110828 unknown[1094]: wrote ssh authorized keys file for user: core Mar 11 01:22:45.152974 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 11 01:22:45.265933 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 11 01:22:45.273989 ignition[1094]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Mar 11 01:22:45.724389 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 11 01:22:46.408927 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 11 01:22:46.420035 ignition[1094]: INFO : files: files passed Mar 11 01:22:46.420035 ignition[1094]: INFO : Ignition finished successfully Mar 11 01:22:46.422011 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 11 01:22:46.452704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 11 01:22:46.464602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 11 01:22:46.522775 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:22:46.522775 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:22:46.478700 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 11 01:22:46.540227 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 11 01:22:46.478782 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 11 01:22:46.536914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 01:22:46.545772 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 11 01:22:46.572668 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 11 01:22:46.606788 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 11 01:22:46.606914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 11 01:22:46.617425 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 11 01:22:46.627857 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 11 01:22:46.637310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 11 01:22:46.649669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 11 01:22:46.667153 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 01:22:46.680623 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 11 01:22:46.695525 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 11 01:22:46.700589 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 11 01:22:46.710152 systemd[1]: Stopped target timers.target - Timer Units. Mar 11 01:22:46.718819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 11 01:22:46.718927 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 11 01:22:46.731204 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 11 01:22:46.735878 systemd[1]: Stopped target basic.target - Basic System. Mar 11 01:22:46.744575 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 11 01:22:46.753433 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 11 01:22:46.761962 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 11 01:22:46.771269 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 11 01:22:46.780377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 11 01:22:46.790776 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 11 01:22:46.799396 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 11 01:22:46.808670 systemd[1]: Stopped target swap.target - Swaps. Mar 11 01:22:46.816600 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 11 01:22:46.816715 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 11 01:22:46.828130 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 11 01:22:46.833002 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 11 01:22:46.842483 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 11 01:22:46.842551 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 11 01:22:46.851853 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 11 01:22:46.851955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 11 01:22:46.865200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 11 01:22:46.865304 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 11 01:22:46.870718 systemd[1]: ignition-files.service: Deactivated successfully. Mar 11 01:22:46.870806 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 11 01:22:46.926407 ignition[1145]: INFO : Ignition 2.19.0 Mar 11 01:22:46.926407 ignition[1145]: INFO : Stage: umount Mar 11 01:22:46.926407 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 11 01:22:46.926407 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 11 01:22:46.879311 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 11 01:22:46.958182 ignition[1145]: INFO : umount: umount passed Mar 11 01:22:46.958182 ignition[1145]: INFO : Ignition finished successfully Mar 11 01:22:46.879401 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 11 01:22:46.903645 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 11 01:22:46.921838 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 11 01:22:46.930014 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 11 01:22:46.930189 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 11 01:22:46.939628 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 11 01:22:46.939720 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 11 01:22:46.959704 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 11 01:22:46.960300 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 11 01:22:46.960395 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 11 01:22:46.969931 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 11 01:22:46.970215 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 11 01:22:46.979517 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 11 01:22:46.979566 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 11 01:22:46.987736 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 11 01:22:46.987777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 11 01:22:46.995820 systemd[1]: Stopped target network.target - Network. Mar 11 01:22:47.004248 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 11 01:22:47.004310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 11 01:22:47.015274 systemd[1]: Stopped target paths.target - Path Units. Mar 11 01:22:47.022999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 11 01:22:47.026476 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 11 01:22:47.032731 systemd[1]: Stopped target slices.target - Slice Units. Mar 11 01:22:47.040352 systemd[1]: Stopped target sockets.target - Socket Units. Mar 11 01:22:47.044491 systemd[1]: iscsid.socket: Deactivated successfully. Mar 11 01:22:47.044547 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 11 01:22:47.052544 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 11 01:22:47.052592 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 11 01:22:47.062324 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 11 01:22:47.062371 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 11 01:22:47.071243 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 11 01:22:47.071283 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 11 01:22:47.080212 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Mar 11 01:22:47.088418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 11 01:22:47.097276 systemd-networkd[904]: eth0: DHCPv6 lease lost
Mar 11 01:22:47.099059 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 11 01:22:47.099166 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 11 01:22:47.109369 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 11 01:22:47.294921 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: Data path switched from VF: enP39900s1
Mar 11 01:22:47.109477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 11 01:22:47.119965 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 11 01:22:47.120119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 11 01:22:47.129425 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 11 01:22:47.129496 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:22:47.146573 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 11 01:22:47.154537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 11 01:22:47.154598 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 01:22:47.166305 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 11 01:22:47.166352 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:22:47.174163 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 11 01:22:47.174198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:22:47.183181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 11 01:22:47.183220 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:22:47.193183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:22:47.238038 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 11 01:22:47.238171 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:22:47.248560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 11 01:22:47.248598 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:22:47.253601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 11 01:22:47.253633 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:22:47.261532 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 11 01:22:47.261581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 01:22:47.276856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 11 01:22:47.276908 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 11 01:22:47.285603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 01:22:47.285642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:47.313609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 11 01:22:47.329507 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 11 01:22:47.329570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:22:47.335861 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 11 01:22:47.505588 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Mar 11 01:22:47.335906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:22:47.345999 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 11 01:22:47.346041 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:22:47.356808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:22:47.356849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:47.365961 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 11 01:22:47.366088 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 11 01:22:47.374039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 11 01:22:47.374120 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 11 01:22:47.383751 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 11 01:22:47.383841 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 11 01:22:47.397209 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 11 01:22:47.397310 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 11 01:22:47.407065 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 11 01:22:47.431859 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 11 01:22:47.452205 systemd[1]: Switching root.
Mar 11 01:22:47.578767 systemd-journald[218]: Journal stopped
Mar 11 01:22:40.188780 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 11 01:22:40.188802 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 10 23:05:53 -00 2026
Mar 11 01:22:40.188811 kernel: KASLR enabled
Mar 11 01:22:40.188817 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 11 01:22:40.188824 kernel: printk: bootconsole [pl11] enabled
Mar 11 01:22:40.188830 kernel: efi: EFI v2.7 by EDK II
Mar 11 01:22:40.188838 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 11 01:22:40.188844 kernel: random: crng init done
Mar 11 01:22:40.188851 kernel: ACPI: Early table checksum verification disabled
Mar 11 01:22:40.188857 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 11 01:22:40.188864 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188870 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188878 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 11 01:22:40.188885 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188892 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188899 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188906 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188915 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188921 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188928 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 11 01:22:40.188935 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 11 01:22:40.188942 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 11 01:22:40.188949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 11 01:22:40.188955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 11 01:22:40.188962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 11 01:22:40.188969 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 11 01:22:40.188976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 11 01:22:40.188982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 11 01:22:40.188990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 11 01:22:40.188997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 11 01:22:40.189004 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 11 01:22:40.189011 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 11 01:22:40.189017 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 11 01:22:40.189024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 11 01:22:40.189031 kernel: NUMA: NODE_DATA [mem 0x1bf7f1800-0x1bf7f6fff]
Mar 11 01:22:40.189037 kernel: Zone ranges:
Mar 11 01:22:40.189044 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 11 01:22:40.189051 kernel: DMA32 empty
Mar 11 01:22:40.189058 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 11 01:22:40.189064 kernel: Movable zone start for each node
Mar 11 01:22:40.189075 kernel: Early memory node ranges
Mar 11 01:22:40.189083 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 11 01:22:40.189090 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 11 01:22:40.189097 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 11 01:22:40.189104 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 11 01:22:40.189113 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 11 01:22:40.189120 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 11 01:22:40.189127 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 11 01:22:40.189134 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 11 01:22:40.189141 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 11 01:22:40.189148 kernel: psci: probing for conduit method from ACPI.
Mar 11 01:22:40.189155 kernel: psci: PSCIv1.1 detected in firmware.
Mar 11 01:22:40.189163 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 11 01:22:40.189170 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 11 01:22:40.191203 kernel: psci: SMC Calling Convention v1.4
Mar 11 01:22:40.191212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 11 01:22:40.191220 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 11 01:22:40.191233 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 11 01:22:40.191240 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 11 01:22:40.191248 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 11 01:22:40.191255 kernel: Detected PIPT I-cache on CPU0
Mar 11 01:22:40.191262 kernel: CPU features: detected: GIC system register CPU interface
Mar 11 01:22:40.191269 kernel: CPU features: detected: Hardware dirty bit management
Mar 11 01:22:40.191277 kernel: CPU features: detected: Spectre-BHB
Mar 11 01:22:40.191284 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 11 01:22:40.191291 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 11 01:22:40.191298 kernel: CPU features: detected: ARM erratum 1418040
Mar 11 01:22:40.191306 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 11 01:22:40.191314 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 11 01:22:40.191322 kernel: alternatives: applying boot alternatives
Mar 11 01:22:40.191331 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c
Mar 11 01:22:40.191338 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 11 01:22:40.191346 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 11 01:22:40.191353 kernel: Fallback order for Node 0: 0
Mar 11 01:22:40.191360 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 11 01:22:40.191367 kernel: Policy zone: Normal
Mar 11 01:22:40.191374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 11 01:22:40.191381 kernel: software IO TLB: area num 2.
Mar 11 01:22:40.191389 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 11 01:22:40.191398 kernel: Memory: 3982644K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211516K reserved, 0K cma-reserved)
Mar 11 01:22:40.191405 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 11 01:22:40.191412 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 11 01:22:40.191420 kernel: rcu: RCU event tracing is enabled.
Mar 11 01:22:40.191427 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 11 01:22:40.191434 kernel: Trampoline variant of Tasks RCU enabled.
Mar 11 01:22:40.191442 kernel: Tracing variant of Tasks RCU enabled.
Mar 11 01:22:40.191449 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 11 01:22:40.191456 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 11 01:22:40.191463 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 11 01:22:40.191470 kernel: GICv3: 960 SPIs implemented
Mar 11 01:22:40.191479 kernel: GICv3: 0 Extended SPIs implemented
Mar 11 01:22:40.191486 kernel: Root IRQ handler: gic_handle_irq
Mar 11 01:22:40.191493 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 11 01:22:40.191501 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 11 01:22:40.191508 kernel: ITS: No ITS available, not enabling LPIs
Mar 11 01:22:40.191515 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 11 01:22:40.191523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 11 01:22:40.191530 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 11 01:22:40.191537 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 11 01:22:40.191544 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 11 01:22:40.191552 kernel: Console: colour dummy device 80x25
Mar 11 01:22:40.191561 kernel: printk: console [tty1] enabled
Mar 11 01:22:40.191568 kernel: ACPI: Core revision 20230628
Mar 11 01:22:40.191576 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 11 01:22:40.191583 kernel: pid_max: default: 32768 minimum: 301
Mar 11 01:22:40.191591 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 11 01:22:40.191598 kernel: landlock: Up and running.
Mar 11 01:22:40.191606 kernel: SELinux: Initializing.
Mar 11 01:22:40.191613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.191621 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.191630 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 11 01:22:40.191637 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 11 01:22:40.191645 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 11 01:22:40.191652 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 11 01:22:40.191660 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 11 01:22:40.191667 kernel: rcu: Hierarchical SRCU implementation.
Mar 11 01:22:40.191675 kernel: rcu: Max phase no-delay instances is 400.
Mar 11 01:22:40.191682 kernel: Remapping and enabling EFI services.
Mar 11 01:22:40.191695 kernel: smp: Bringing up secondary CPUs ...
Mar 11 01:22:40.191703 kernel: Detected PIPT I-cache on CPU1
Mar 11 01:22:40.191711 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 11 01:22:40.191719 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 11 01:22:40.191728 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 11 01:22:40.191736 kernel: smp: Brought up 1 node, 2 CPUs
Mar 11 01:22:40.191744 kernel: SMP: Total of 2 processors activated.
Mar 11 01:22:40.191752 kernel: CPU features: detected: 32-bit EL0 Support
Mar 11 01:22:40.191760 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 11 01:22:40.191769 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 11 01:22:40.191777 kernel: CPU features: detected: CRC32 instructions
Mar 11 01:22:40.191785 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 11 01:22:40.191793 kernel: CPU features: detected: LSE atomic instructions
Mar 11 01:22:40.191801 kernel: CPU features: detected: Privileged Access Never
Mar 11 01:22:40.191809 kernel: CPU: All CPU(s) started at EL1
Mar 11 01:22:40.191816 kernel: alternatives: applying system-wide alternatives
Mar 11 01:22:40.191824 kernel: devtmpfs: initialized
Mar 11 01:22:40.191832 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 11 01:22:40.191841 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 11 01:22:40.191849 kernel: pinctrl core: initialized pinctrl subsystem
Mar 11 01:22:40.191857 kernel: SMBIOS 3.1.0 present.
Mar 11 01:22:40.191865 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 11 01:22:40.191873 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 11 01:22:40.191880 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 11 01:22:40.191888 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 11 01:22:40.191896 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 11 01:22:40.191904 kernel: audit: initializing netlink subsys (disabled)
Mar 11 01:22:40.191913 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 11 01:22:40.191921 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 11 01:22:40.191929 kernel: cpuidle: using governor menu
Mar 11 01:22:40.191936 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 11 01:22:40.191944 kernel: ASID allocator initialised with 32768 entries
Mar 11 01:22:40.191952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 11 01:22:40.191960 kernel: Serial: AMBA PL011 UART driver
Mar 11 01:22:40.191968 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 11 01:22:40.191975 kernel: Modules: 0 pages in range for non-PLT usage
Mar 11 01:22:40.191985 kernel: Modules: 509008 pages in range for PLT usage
Mar 11 01:22:40.191992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192000 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 11 01:22:40.192008 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 11 01:22:40.192024 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192032 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 11 01:22:40.192039 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 11 01:22:40.192047 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 11 01:22:40.192056 kernel: ACPI: Added _OSI(Module Device)
Mar 11 01:22:40.192064 kernel: ACPI: Added _OSI(Processor Device)
Mar 11 01:22:40.192072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 11 01:22:40.192079 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 11 01:22:40.192087 kernel: ACPI: Interpreter enabled
Mar 11 01:22:40.192095 kernel: ACPI: Using GIC for interrupt routing
Mar 11 01:22:40.192103 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 11 01:22:40.192110 kernel: printk: console [ttyAMA0] enabled
Mar 11 01:22:40.192118 kernel: printk: bootconsole [pl11] disabled
Mar 11 01:22:40.192127 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 11 01:22:40.192135 kernel: iommu: Default domain type: Translated
Mar 11 01:22:40.192143 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 11 01:22:40.192150 kernel: efivars: Registered efivars operations
Mar 11 01:22:40.192158 kernel: vgaarb: loaded
Mar 11 01:22:40.192166 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 11 01:22:40.192180 kernel: VFS: Disk quotas dquot_6.6.0
Mar 11 01:22:40.192188 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 11 01:22:40.192196 kernel: pnp: PnP ACPI init
Mar 11 01:22:40.192206 kernel: pnp: PnP ACPI: found 0 devices
Mar 11 01:22:40.192213 kernel: NET: Registered PF_INET protocol family
Mar 11 01:22:40.192221 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 11 01:22:40.192229 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 11 01:22:40.192237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 11 01:22:40.192245 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 11 01:22:40.192253 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 11 01:22:40.192261 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 11 01:22:40.192269 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.192278 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 11 01:22:40.192286 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 11 01:22:40.192294 kernel: PCI: CLS 0 bytes, default 64
Mar 11 01:22:40.192301 kernel: kvm [1]: HYP mode not available
Mar 11 01:22:40.192309 kernel: Initialise system trusted keyrings
Mar 11 01:22:40.192317 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 11 01:22:40.192325 kernel: Key type asymmetric registered
Mar 11 01:22:40.192332 kernel: Asymmetric key parser 'x509' registered
Mar 11 01:22:40.192340 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 11 01:22:40.192350 kernel: io scheduler mq-deadline registered
Mar 11 01:22:40.192358 kernel: io scheduler kyber registered
Mar 11 01:22:40.192365 kernel: io scheduler bfq registered
Mar 11 01:22:40.192373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 11 01:22:40.192381 kernel: thunder_xcv, ver 1.0
Mar 11 01:22:40.192388 kernel: thunder_bgx, ver 1.0
Mar 11 01:22:40.192396 kernel: nicpf, ver 1.0
Mar 11 01:22:40.192403 kernel: nicvf, ver 1.0
Mar 11 01:22:40.192538 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 11 01:22:40.192617 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-11T01:22:39 UTC (1773192159)
Mar 11 01:22:40.192628 kernel: efifb: probing for efifb
Mar 11 01:22:40.192636 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 11 01:22:40.192643 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 11 01:22:40.192651 kernel: efifb: scrolling: redraw
Mar 11 01:22:40.192659 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 11 01:22:40.192667 kernel: Console: switching to colour frame buffer device 128x48
Mar 11 01:22:40.192675 kernel: fb0: EFI VGA frame buffer device
Mar 11 01:22:40.192685 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 11 01:22:40.192693 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 11 01:22:40.192701 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 11 01:22:40.192709 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 11 01:22:40.192716 kernel: watchdog: Hard watchdog permanently disabled
Mar 11 01:22:40.192724 kernel: NET: Registered PF_INET6 protocol family
Mar 11 01:22:40.192732 kernel: Segment Routing with IPv6
Mar 11 01:22:40.192740 kernel: In-situ OAM (IOAM) with IPv6
Mar 11 01:22:40.192748 kernel: NET: Registered PF_PACKET protocol family
Mar 11 01:22:40.192757 kernel: Key type dns_resolver registered
Mar 11 01:22:40.192765 kernel: registered taskstats version 1
Mar 11 01:22:40.192773 kernel: Loading compiled-in X.509 certificates
Mar 11 01:22:40.192781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e2d32b7c633536fa6eb6e76ba97909ae7ad11d09'
Mar 11 01:22:40.192788 kernel: Key type .fscrypt registered
Mar 11 01:22:40.192796 kernel: Key type fscrypt-provisioning registered
Mar 11 01:22:40.192804 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 11 01:22:40.192812 kernel: ima: Allocated hash algorithm: sha1
Mar 11 01:22:40.192819 kernel: ima: No architecture policies found
Mar 11 01:22:40.192828 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 11 01:22:40.192836 kernel: clk: Disabling unused clocks
Mar 11 01:22:40.192844 kernel: Freeing unused kernel memory: 39424K
Mar 11 01:22:40.192852 kernel: Run /init as init process
Mar 11 01:22:40.192860 kernel: with arguments:
Mar 11 01:22:40.192867 kernel: /init
Mar 11 01:22:40.192875 kernel: with environment:
Mar 11 01:22:40.192882 kernel: HOME=/
Mar 11 01:22:40.192890 kernel: TERM=linux
Mar 11 01:22:40.192900 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 01:22:40.192911 systemd[1]: Detected virtualization microsoft.
Mar 11 01:22:40.192920 systemd[1]: Detected architecture arm64.
Mar 11 01:22:40.192928 systemd[1]: Running in initrd.
Mar 11 01:22:40.192936 systemd[1]: No hostname configured, using default hostname.
Mar 11 01:22:40.192944 systemd[1]: Hostname set to .
Mar 11 01:22:40.192952 systemd[1]: Initializing machine ID from random generator.
Mar 11 01:22:40.192962 systemd[1]: Queued start job for default target initrd.target.
Mar 11 01:22:40.192971 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:22:40.192979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:22:40.192988 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 11 01:22:40.192997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 01:22:40.193005 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 11 01:22:40.193014 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 11 01:22:40.193024 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 11 01:22:40.193034 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 11 01:22:40.193043 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:22:40.193051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:22:40.193060 systemd[1]: Reached target paths.target - Path Units.
Mar 11 01:22:40.193073 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 01:22:40.193083 systemd[1]: Reached target swap.target - Swaps.
Mar 11 01:22:40.193093 systemd[1]: Reached target timers.target - Timer Units.
Mar 11 01:22:40.193101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:22:40.193112 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 01:22:40.193122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 11 01:22:40.193130 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 11 01:22:40.193139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:22:40.193147 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:22:40.193156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:22:40.193164 systemd[1]: Reached target sockets.target - Socket Units.
Mar 11 01:22:40.193807 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 11 01:22:40.193826 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 01:22:40.193835 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 11 01:22:40.193843 systemd[1]: Starting systemd-fsck-usr.service...
Mar 11 01:22:40.193852 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 01:22:40.193860 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 01:22:40.193892 systemd-journald[218]: Collecting audit messages is disabled.
Mar 11 01:22:40.193914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:22:40.193923 systemd-journald[218]: Journal started
Mar 11 01:22:40.193943 systemd-journald[218]: Runtime Journal (/run/log/journal/e3aa01b02a964a6b86ab3266e97b3a60) is 8.0M, max 78.5M, 70.5M free.
Mar 11 01:22:40.194262 systemd-modules-load[219]: Inserted module 'overlay'
Mar 11 01:22:40.213860 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 01:22:40.222632 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 11 01:22:40.229708 systemd-modules-load[219]: Inserted module 'br_netfilter'
Mar 11 01:22:40.234799 kernel: Bridge firewalling registered
Mar 11 01:22:40.230328 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 11 01:22:40.241197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:22:40.249268 systemd[1]: Finished systemd-fsck-usr.service.
Mar 11 01:22:40.257619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:22:40.265556 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:40.282373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:22:40.288301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 01:22:40.298331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 01:22:40.329283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 01:22:40.336192 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:40.345499 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:22:40.350321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:22:40.362762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:22:40.386399 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 11 01:22:40.397316 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 11 01:22:40.411979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 01:22:40.419148 dracut-cmdline[253]: dracut-dracut-053
Mar 11 01:22:40.431204 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:22:40.447149 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7fe021b64c084ac374d4d673d0197603cd77b13b2055fe6fd36a6b55fadd8e5c
Mar 11 01:22:40.475360 systemd-resolved[256]: Positive Trust Anchors:
Mar 11 01:22:40.475376 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 11 01:22:40.475407 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 11 01:22:40.478013 systemd-resolved[256]: Defaulting to hostname 'linux'.
Mar 11 01:22:40.479355 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 11 01:22:40.485443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:22:40.587195 kernel: SCSI subsystem initialized
Mar 11 01:22:40.593182 kernel: Loading iSCSI transport class v2.0-870.
Mar 11 01:22:40.603191 kernel: iscsi: registered transport (tcp)
Mar 11 01:22:40.619373 kernel: iscsi: registered transport (qla4xxx)
Mar 11 01:22:40.619390 kernel: QLogic iSCSI HBA Driver
Mar 11 01:22:40.651680 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 11 01:22:40.665353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 11 01:22:40.693003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 11 01:22:40.693053 kernel: device-mapper: uevent: version 1.0.3
Mar 11 01:22:40.697914 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 11 01:22:40.748190 kernel: raid6: neonx8 gen() 15812 MB/s
Mar 11 01:22:40.763187 kernel: raid6: neonx4 gen() 15687 MB/s
Mar 11 01:22:40.782182 kernel: raid6: neonx2 gen() 13245 MB/s
Mar 11 01:22:40.802185 kernel: raid6: neonx1 gen() 10513 MB/s
Mar 11 01:22:40.821179 kernel: raid6: int64x8 gen() 6978 MB/s
Mar 11 01:22:40.840179 kernel: raid6: int64x4 gen() 7372 MB/s
Mar 11 01:22:40.860180 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 11 01:22:40.882319 kernel: raid6: int64x1 gen() 5068 MB/s
Mar 11 01:22:40.882331 kernel: raid6: using algorithm neonx8 gen() 15812 MB/s
Mar 11 01:22:40.904968 kernel: raid6: .... xor() 12056 MB/s, rmw enabled
Mar 11 01:22:40.904980 kernel: raid6: using neon recovery algorithm
Mar 11 01:22:40.913182 kernel: xor: measuring software checksum speed
Mar 11 01:22:40.918691 kernel: 8regs : 18881 MB/sec
Mar 11 01:22:40.918704 kernel: 32regs : 19664 MB/sec
Mar 11 01:22:40.921632 kernel: arm64_neon : 27016 MB/sec
Mar 11 01:22:40.925181 kernel: xor: using function: arm64_neon (27016 MB/sec)
Mar 11 01:22:40.975534 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 11 01:22:40.984219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 01:22:40.999295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:22:41.019751 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 11 01:22:41.024143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:22:41.047302 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 11 01:22:41.064376 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 11 01:22:41.093802 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 01:22:41.106385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 01:22:41.146208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:22:41.163812 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 11 01:22:41.189982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 11 01:22:41.204935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 01:22:41.216935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:22:41.228614 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 01:22:41.243248 kernel: hv_vmbus: Vmbus version:5.3
Mar 11 01:22:41.244412 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 11 01:22:41.267667 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 01:22:41.297209 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 11 01:22:41.297232 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 11 01:22:41.297251 kernel: hv_vmbus: registering driver hid_hyperv
Mar 11 01:22:41.297261 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 11 01:22:41.297271 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 11 01:22:41.297281 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 11 01:22:41.297421 kernel: hv_vmbus: registering driver hv_netvsc
Mar 11 01:22:41.297433 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 11 01:22:41.311805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 01:22:41.343008 kernel: hv_vmbus: registering driver hv_storvsc
Mar 11 01:22:41.343029 kernel: scsi host0: storvsc_host_t
Mar 11 01:22:41.343194 kernel: scsi host1: storvsc_host_t
Mar 11 01:22:41.343217 kernel: PTP clock support registered
Mar 11 01:22:41.312262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:41.356528 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 11 01:22:41.322400 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:22:41.373580 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 11 01:22:41.336763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:22:41.336992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:41.346703 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:22:41.379550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 11 01:22:41.412108 kernel: hv_utils: Registering HyperV Utility Driver
Mar 11 01:22:41.412156 kernel: hv_vmbus: registering driver hv_utils
Mar 11 01:22:41.413189 kernel: hv_utils: Heartbeat IC version 3.0
Mar 11 01:22:41.418288 kernel: hv_utils: Shutdown IC version 3.2
Mar 11 01:22:41.418326 kernel: hv_utils: TimeSync IC version 4.0
Mar 11 01:22:41.421185 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: VF slot 1 added
Mar 11 01:22:41.292182 systemd-resolved[256]: Clock change detected. Flushing caches.
Mar 11 01:22:41.308390 systemd-journald[218]: Time jumped backwards, rotating.
Mar 11 01:22:41.295196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:41.322487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 11 01:22:41.345877 kernel: hv_vmbus: registering driver hv_pci
Mar 11 01:22:41.345897 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 11 01:22:41.346081 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 11 01:22:41.346092 kernel: hv_pci fbf108b5-9bdc-45ed-b09b-17feca332aa2: PCI VMBus probing: Using version 0x10004
Mar 11 01:22:41.347473 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 11 01:22:41.357473 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 11 01:22:41.363673 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 11 01:22:41.374364 kernel: hv_pci fbf108b5-9bdc-45ed-b09b-17feca332aa2: PCI host bridge to bus 9bdc:00
Mar 11 01:22:41.374534 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 11 01:22:41.380155 kernel: pci_bus 9bdc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 11 01:22:41.380323 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 11 01:22:41.377481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:41.410175 kernel: pci_bus 9bdc:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 11 01:22:41.410301 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 11 01:22:41.410402 kernel: pci 9bdc:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 11 01:22:41.410425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 11 01:22:41.420250 kernel: pci 9bdc:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 11 01:22:41.420294 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:22:41.420304 kernel: pci 9bdc:00:02.0: enabling Extended Tags
Mar 11 01:22:41.429476 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 11 01:22:41.452074 kernel: pci 9bdc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9bdc:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 11 01:22:41.452252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 11 01:22:41.452343 kernel: pci_bus 9bdc:00: busn_res: [bus 00-ff] end is updated to 00
Mar 11 01:22:41.460652 kernel: pci 9bdc:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 11 01:22:41.501693 kernel: mlx5_core 9bdc:00:02.0: enabling device (0000 -> 0002)
Mar 11 01:22:41.508100 kernel: mlx5_core 9bdc:00:02.0: firmware version: 16.30.5026
Mar 11 01:22:41.583286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 11 01:22:41.604617 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484)
Mar 11 01:22:41.606582 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 11 01:22:41.632001 kernel: BTRFS: device fsid 6268782d-ce1a-4049-a9c9-846620fa6ee9 devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (493)
Mar 11 01:22:41.640431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 11 01:22:41.655379 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 11 01:22:41.660711 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 11 01:22:41.684140 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 11 01:22:41.707472 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:22:41.755536 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: VF registering: eth1
Mar 11 01:22:41.755715 kernel: mlx5_core 9bdc:00:02.0 eth1: joined to eth0
Mar 11 01:22:41.764670 kernel: mlx5_core 9bdc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 11 01:22:41.783498 kernel: mlx5_core 9bdc:00:02.0 enP39900s1: renamed from eth1
Mar 11 01:22:42.723339 disk-uuid[603]: The operation has completed successfully.
Mar 11 01:22:42.727631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 11 01:22:42.783067 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 11 01:22:42.785613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 11 01:22:42.818570 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 11 01:22:42.828811 sh[720]: Success
Mar 11 01:22:42.848636 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 11 01:22:42.925801 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 11 01:22:42.933555 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 11 01:22:42.947839 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 11 01:22:42.977099 kernel: BTRFS info (device dm-0): first mount of filesystem 6268782d-ce1a-4049-a9c9-846620fa6ee9
Mar 11 01:22:42.977149 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:22:42.982508 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 11 01:22:42.986450 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 11 01:22:42.990070 kernel: BTRFS info (device dm-0): using free space tree
Mar 11 01:22:43.054842 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 11 01:22:43.059106 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 11 01:22:43.079590 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 11 01:22:43.086588 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 11 01:22:43.123619 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:22:43.123670 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:22:43.127770 kernel: BTRFS info (device sda6): using free space tree
Mar 11 01:22:43.142470 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 11 01:22:43.151266 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 11 01:22:43.161474 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:22:43.168603 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 11 01:22:43.183645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 11 01:22:43.216885 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 01:22:43.229618 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 11 01:22:43.272691 systemd-networkd[904]: lo: Link UP
Mar 11 01:22:43.272698 systemd-networkd[904]: lo: Gained carrier
Mar 11 01:22:43.277086 systemd-networkd[904]: Enumeration completed
Mar 11 01:22:43.277241 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 11 01:22:43.282443 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:22:43.282446 systemd-networkd[904]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 11 01:22:43.283047 systemd[1]: Reached target network.target - Network.
Mar 11 01:22:43.362496 kernel: mlx5_core 9bdc:00:02.0 enP39900s1: Link up
Mar 11 01:22:43.387150 ignition[871]: Ignition 2.19.0
Mar 11 01:22:43.387164 ignition[871]: Stage: fetch-offline
Mar 11 01:22:43.391117 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 01:22:43.387196 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:43.414761 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: Data path switched to VF: enP39900s1
Mar 11 01:22:43.414310 systemd-networkd[904]: enP39900s1: Link UP
Mar 11 01:22:43.387204 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:43.414411 systemd-networkd[904]: eth0: Link UP
Mar 11 01:22:43.387403 ignition[871]: parsed url from cmdline: ""
Mar 11 01:22:43.414560 systemd-networkd[904]: eth0: Gained carrier
Mar 11 01:22:43.387406 ignition[871]: no config URL provided
Mar 11 01:22:43.414570 systemd-networkd[904]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 11 01:22:43.387411 ignition[871]: reading system config file "/usr/lib/ignition/user.ign"
Mar 11 01:22:43.422592 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 11 01:22:43.387420 ignition[871]: no config at "/usr/lib/ignition/user.ign"
Mar 11 01:22:43.426643 systemd-networkd[904]: enP39900s1: Gained carrier
Mar 11 01:22:43.387428 ignition[871]: failed to fetch config: resource requires networking
Mar 11 01:22:43.449490 systemd-networkd[904]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 11 01:22:43.387637 ignition[871]: Ignition finished successfully
Mar 11 01:22:43.439268 ignition[912]: Ignition 2.19.0
Mar 11 01:22:43.439278 ignition[912]: Stage: fetch
Mar 11 01:22:43.440427 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:43.440448 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:43.440734 ignition[912]: parsed url from cmdline: ""
Mar 11 01:22:43.440738 ignition[912]: no config URL provided
Mar 11 01:22:43.440743 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Mar 11 01:22:43.440753 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Mar 11 01:22:43.440774 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 11 01:22:43.440932 ignition[912]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 11 01:22:43.641905 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2
Mar 11 01:22:43.728505 ignition[912]: GET result: OK
Mar 11 01:22:43.728568 ignition[912]: config has been read from IMDS userdata
Mar 11 01:22:43.728611 ignition[912]: parsing config with SHA512: 8ace4f8e07dc58a5bc285f68cfc4972ab7408ef4cf2b60d4c0e087a7c1342b5463231629be5d4aa63d851662ae27827134c7813baa339eeeab491b0ce64be6e7
Mar 11 01:22:43.732667 unknown[912]: fetched base config from "system"
Mar 11 01:22:43.733011 ignition[912]: fetch: fetch complete
Mar 11 01:22:43.732673 unknown[912]: fetched base config from "system"
Mar 11 01:22:43.733015 ignition[912]: fetch: fetch passed
Mar 11 01:22:43.732681 unknown[912]: fetched user config from "azure"
Mar 11 01:22:43.733056 ignition[912]: Ignition finished successfully
Mar 11 01:22:43.736631 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 11 01:22:43.756664 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 11 01:22:43.776650 ignition[920]: Ignition 2.19.0
Mar 11 01:22:43.776658 ignition[920]: Stage: kargs
Mar 11 01:22:43.780715 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 11 01:22:43.776818 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:43.793574 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 11 01:22:43.776827 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:43.808717 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 11 01:22:43.777820 ignition[920]: kargs: kargs passed
Mar 11 01:22:43.813520 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 11 01:22:43.777866 ignition[920]: Ignition finished successfully
Mar 11 01:22:43.823010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 11 01:22:43.805868 ignition[926]: Ignition 2.19.0
Mar 11 01:22:43.832808 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 01:22:43.805874 ignition[926]: Stage: disks
Mar 11 01:22:43.840011 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 11 01:22:43.806094 ignition[926]: no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:43.849787 systemd[1]: Reached target basic.target - Basic System.
Mar 11 01:22:43.806104 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:43.869614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 11 01:22:43.807348 ignition[926]: disks: disks passed
Mar 11 01:22:43.807399 ignition[926]: Ignition finished successfully
Mar 11 01:22:43.924532 systemd-fsck[934]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 11 01:22:43.931892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 11 01:22:43.946597 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 11 01:22:43.996480 kernel: EXT4-fs (sda9): mounted filesystem 19488164-8e25-4d6a-86d9-f70a8ed432cb r/w with ordered data mode. Quota mode: none.
Mar 11 01:22:43.997315 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 11 01:22:44.001446 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 11 01:22:44.022557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 01:22:44.031263 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 11 01:22:44.050471 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Mar 11 01:22:44.055483 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:22:44.060885 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:22:44.061342 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 11 01:22:44.075990 kernel: BTRFS info (device sda6): using free space tree
Mar 11 01:22:44.070164 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 11 01:22:44.070194 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 01:22:44.082693 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 11 01:22:44.105697 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 11 01:22:44.120465 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 11 01:22:44.121229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 01:22:44.239637 coreos-metadata[947]: Mar 11 01:22:44.239 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 11 01:22:44.248274 coreos-metadata[947]: Mar 11 01:22:44.248 INFO Fetch successful
Mar 11 01:22:44.253170 coreos-metadata[947]: Mar 11 01:22:44.253 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 11 01:22:44.271630 coreos-metadata[947]: Mar 11 01:22:44.271 INFO Fetch successful
Mar 11 01:22:44.276231 coreos-metadata[947]: Mar 11 01:22:44.275 INFO wrote hostname ci-4081.3.6-n-49f1e4db19 to /sysroot/etc/hostname
Mar 11 01:22:44.283450 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 11 01:22:44.320826 initrd-setup-root[976]: cut: /sysroot/etc/passwd: No such file or directory
Mar 11 01:22:44.331958 initrd-setup-root[983]: cut: /sysroot/etc/group: No such file or directory
Mar 11 01:22:44.344794 initrd-setup-root[990]: cut: /sysroot/etc/shadow: No such file or directory
Mar 11 01:22:44.353540 initrd-setup-root[997]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 11 01:22:44.592423 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 11 01:22:44.604570 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 11 01:22:44.610297 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 11 01:22:44.629590 kernel: BTRFS info (device sda6): last unmount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:22:44.628926 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 11 01:22:44.650921 ignition[1065]: INFO : Ignition 2.19.0
Mar 11 01:22:44.654303 ignition[1065]: INFO : Stage: mount
Mar 11 01:22:44.654303 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:44.654303 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:44.651506 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 11 01:22:44.679692 ignition[1065]: INFO : mount: mount passed
Mar 11 01:22:44.679692 ignition[1065]: INFO : Ignition finished successfully
Mar 11 01:22:44.666972 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 11 01:22:44.686630 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 11 01:22:45.005644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 11 01:22:45.031374 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076)
Mar 11 01:22:45.031411 kernel: BTRFS info (device sda6): first mount of filesystem 099bc99e-50a7-40d1-8691-55b4d6eb7046
Mar 11 01:22:45.036079 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 11 01:22:45.039424 kernel: BTRFS info (device sda6): using free space tree
Mar 11 01:22:45.048467 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 11 01:22:45.048771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 11 01:22:45.077417 ignition[1094]: INFO : Ignition 2.19.0
Mar 11 01:22:45.077417 ignition[1094]: INFO : Stage: files
Mar 11 01:22:45.084007 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:45.084007 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:45.084007 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Mar 11 01:22:45.084007 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 11 01:22:45.084007 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 11 01:22:45.110588 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 11 01:22:45.110588 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 11 01:22:45.098467 systemd-networkd[904]: eth0: Gained IPv6LL
Mar 11 01:22:45.125123 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 11 01:22:45.125123 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 11 01:22:45.125123 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 11 01:22:45.110828 unknown[1094]: wrote ssh authorized keys file for user: core
Mar 11 01:22:45.152974 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 11 01:22:45.265933 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 11 01:22:45.273989 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 11 01:22:45.724389 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 11 01:22:46.408927 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 11 01:22:46.420035 ignition[1094]: INFO : files: files passed
Mar 11 01:22:46.420035 ignition[1094]: INFO : Ignition finished successfully
Mar 11 01:22:46.422011 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 11 01:22:46.452704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 11 01:22:46.464602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 11 01:22:46.522775 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 01:22:46.522775 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 01:22:46.478700 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 11 01:22:46.540227 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 11 01:22:46.478782 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 11 01:22:46.536914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 01:22:46.545772 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 11 01:22:46.572668 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 11 01:22:46.606788 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 11 01:22:46.606914 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 11 01:22:46.617425 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 11 01:22:46.627857 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 11 01:22:46.637310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 11 01:22:46.649669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 11 01:22:46.667153 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 01:22:46.680623 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 11 01:22:46.695525 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 11 01:22:46.700589 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:22:46.710152 systemd[1]: Stopped target timers.target - Timer Units.
Mar 11 01:22:46.718819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 11 01:22:46.718927 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 11 01:22:46.731204 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 11 01:22:46.735878 systemd[1]: Stopped target basic.target - Basic System.
Mar 11 01:22:46.744575 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 11 01:22:46.753433 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 11 01:22:46.761962 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 11 01:22:46.771269 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 11 01:22:46.780377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 11 01:22:46.790776 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 11 01:22:46.799396 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 11 01:22:46.808670 systemd[1]: Stopped target swap.target - Swaps.
Mar 11 01:22:46.816600 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 11 01:22:46.816715 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 11 01:22:46.828130 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 11 01:22:46.833002 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:22:46.842483 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 11 01:22:46.842551 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:22:46.851853 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 11 01:22:46.851955 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 11 01:22:46.865200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 11 01:22:46.865304 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 11 01:22:46.870718 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 11 01:22:46.870806 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 11 01:22:46.926407 ignition[1145]: INFO : Ignition 2.19.0
Mar 11 01:22:46.926407 ignition[1145]: INFO : Stage: umount
Mar 11 01:22:46.926407 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 11 01:22:46.926407 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 11 01:22:46.879311 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 11 01:22:46.958182 ignition[1145]: INFO : umount: umount passed
Mar 11 01:22:46.958182 ignition[1145]: INFO : Ignition finished successfully
Mar 11 01:22:46.879401 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 11 01:22:46.903645 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 11 01:22:46.921838 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 11 01:22:46.930014 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 11 01:22:46.930189 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:22:46.939628 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 11 01:22:46.939720 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 11 01:22:46.959704 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 11 01:22:46.960300 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 11 01:22:46.960395 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 11 01:22:46.969931 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 11 01:22:46.970215 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 11 01:22:46.979517 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 11 01:22:46.979566 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 11 01:22:46.987736 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 11 01:22:46.987777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 11 01:22:46.995820 systemd[1]: Stopped target network.target - Network.
Mar 11 01:22:47.004248 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 11 01:22:47.004310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 11 01:22:47.015274 systemd[1]: Stopped target paths.target - Path Units.
Mar 11 01:22:47.022999 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 11 01:22:47.026476 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:22:47.032731 systemd[1]: Stopped target slices.target - Slice Units.
Mar 11 01:22:47.040352 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 11 01:22:47.044491 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 11 01:22:47.044547 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 11 01:22:47.052544 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 11 01:22:47.052592 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 11 01:22:47.062324 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 11 01:22:47.062371 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 11 01:22:47.071243 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 11 01:22:47.071283 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 11 01:22:47.080212 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 11 01:22:47.088418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 11 01:22:47.097276 systemd-networkd[904]: eth0: DHCPv6 lease lost
Mar 11 01:22:47.099059 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 11 01:22:47.099166 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 11 01:22:47.109369 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 11 01:22:47.294921 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: Data path switched from VF: enP39900s1
Mar 11 01:22:47.109477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 11 01:22:47.119965 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 11 01:22:47.120119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 11 01:22:47.129425 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 11 01:22:47.129496 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:22:47.146573 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 11 01:22:47.154537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 11 01:22:47.154598 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 11 01:22:47.166305 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 11 01:22:47.166352 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:22:47.174163 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 11 01:22:47.174198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:22:47.183181 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 11 01:22:47.183220 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:22:47.193183 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:22:47.238038 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 11 01:22:47.238171 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:22:47.248560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 11 01:22:47.248598 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:22:47.253601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 11 01:22:47.253633 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:22:47.261532 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 11 01:22:47.261581 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 11 01:22:47.276856 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 11 01:22:47.276908 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 11 01:22:47.285603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 11 01:22:47.285642 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 11 01:22:47.313609 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 11 01:22:47.329507 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 11 01:22:47.329570 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:22:47.335861 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 11 01:22:47.505588 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Mar 11 01:22:47.335906 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:22:47.345999 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 11 01:22:47.346041 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:22:47.356808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 11 01:22:47.356849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 11 01:22:47.365961 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 11 01:22:47.366088 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 11 01:22:47.374039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 11 01:22:47.374120 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 11 01:22:47.383751 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 11 01:22:47.383841 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 11 01:22:47.397209 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 11 01:22:47.397310 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 11 01:22:47.407065 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 11 01:22:47.431859 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 11 01:22:47.452205 systemd[1]: Switching root.
Mar 11 01:22:47.578767 systemd-journald[218]: Journal stopped
Mar 11 01:22:49.607843 kernel: SELinux: policy capability network_peer_controls=1
Mar 11 01:22:49.607868 kernel: SELinux: policy capability open_perms=1
Mar 11 01:22:49.607878 kernel: SELinux: policy capability extended_socket_class=1
Mar 11 01:22:49.607887 kernel: SELinux: policy capability always_check_network=0
Mar 11 01:22:49.607896 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 11 01:22:49.607904 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 11 01:22:49.607915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 11 01:22:49.607923 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 11 01:22:49.607932 systemd[1]: Successfully loaded SELinux policy in 77.745ms.
Mar 11 01:22:49.607942 kernel: audit: type=1403 audit(1773192168.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 11 01:22:49.607953 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.234ms.
Mar 11 01:22:49.607963 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 11 01:22:49.607972 systemd[1]: Detected virtualization microsoft.
Mar 11 01:22:49.607980 systemd[1]: Detected architecture arm64.
Mar 11 01:22:49.607990 systemd[1]: Detected first boot.
Mar 11 01:22:49.608001 systemd[1]: Hostname set to .
Mar 11 01:22:49.608010 systemd[1]: Initializing machine ID from random generator.
Mar 11 01:22:49.608019 zram_generator::config[1187]: No configuration found.
Mar 11 01:22:49.608028 systemd[1]: Populated /etc with preset unit settings.
Mar 11 01:22:49.608037 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 11 01:22:49.608046 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 11 01:22:49.608055 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 11 01:22:49.608067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 11 01:22:49.608076 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 11 01:22:49.608086 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 11 01:22:49.608095 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 11 01:22:49.608104 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 11 01:22:49.608115 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 11 01:22:49.608124 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 11 01:22:49.608135 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 11 01:22:49.608145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 11 01:22:49.608154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 11 01:22:49.608163 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 11 01:22:49.608173 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 11 01:22:49.608182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 11 01:22:49.608191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 11 01:22:49.608200 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 11 01:22:49.608211 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 11 01:22:49.608220 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 11 01:22:49.608230 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 11 01:22:49.608241 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 11 01:22:49.608251 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 11 01:22:49.608260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 11 01:22:49.608270 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 11 01:22:49.608279 systemd[1]: Reached target slices.target - Slice Units.
Mar 11 01:22:49.608290 systemd[1]: Reached target swap.target - Swaps.
Mar 11 01:22:49.608299 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 11 01:22:49.608309 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 11 01:22:49.608319 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 11 01:22:49.608328 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 11 01:22:49.608338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 11 01:22:49.608350 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 11 01:22:49.608360 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 11 01:22:49.608369 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 11 01:22:49.608379 systemd[1]: Mounting media.mount - External Media Directory...
Mar 11 01:22:49.608388 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 11 01:22:49.608398 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 11 01:22:49.608407 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 11 01:22:49.608419 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 11 01:22:49.608429 systemd[1]: Reached target machines.target - Containers.
Mar 11 01:22:49.608438 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 11 01:22:49.608448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 11 01:22:49.608473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 11 01:22:49.608483 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 11 01:22:49.608493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 11 01:22:49.608503 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 11 01:22:49.608514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 11 01:22:49.608524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 11 01:22:49.608534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 11 01:22:49.608545 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 11 01:22:49.608554 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 11 01:22:49.608564 kernel: fuse: init (API version 7.39)
Mar 11 01:22:49.608572 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 11 01:22:49.608582 kernel: ACPI: bus type drm_connector registered
Mar 11 01:22:49.608591 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 11 01:22:49.608602 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 11 01:22:49.608612 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 11 01:22:49.608621 kernel: loop: module loaded
Mar 11 01:22:49.608646 systemd-journald[1290]: Collecting audit messages is disabled.
Mar 11 01:22:49.608668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 11 01:22:49.608678 systemd-journald[1290]: Journal started
Mar 11 01:22:49.608698 systemd-journald[1290]: Runtime Journal (/run/log/journal/9a9cd92f423a4482982524d6d99c65cf) is 8.0M, max 78.5M, 70.5M free.
Mar 11 01:22:48.913524 systemd[1]: Queued start job for default target multi-user.target.
Mar 11 01:22:48.956663 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 11 01:22:48.957015 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 11 01:22:48.957307 systemd[1]: systemd-journald.service: Consumed 2.472s CPU time.
Mar 11 01:22:49.624790 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 11 01:22:49.637479 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 11 01:22:49.646137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 11 01:22:49.657449 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 11 01:22:49.657503 systemd[1]: Stopped verity-setup.service.
Mar 11 01:22:49.673466 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 11 01:22:49.672215 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 11 01:22:49.676701 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 11 01:22:49.681512 systemd[1]: Mounted media.mount - External Media Directory.
Mar 11 01:22:49.685755 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 11 01:22:49.690651 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 11 01:22:49.695482 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 11 01:22:49.699713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 11 01:22:49.705317 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 11 01:22:49.711106 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 11 01:22:49.711238 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 11 01:22:49.716953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 11 01:22:49.717100 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 11 01:22:49.722545 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 11 01:22:49.724482 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 11 01:22:49.729389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 11 01:22:49.729528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 11 01:22:49.735376 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 11 01:22:49.737506 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 11 01:22:49.742445 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 11 01:22:49.742587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 11 01:22:49.747550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 11 01:22:49.752745 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 11 01:22:49.758397 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 11 01:22:49.764023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 11 01:22:49.779917 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 11 01:22:49.791524 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 11 01:22:49.797058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 11 01:22:49.802106 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 11 01:22:49.802139 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 11 01:22:49.807424 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 11 01:22:49.813699 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 11 01:22:49.819650 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 11 01:22:49.824193 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 11 01:22:49.825202 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 11 01:22:49.832170 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 11 01:22:49.838434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 11 01:22:49.839587 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 11 01:22:49.845855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 11 01:22:49.860232 systemd-journald[1290]: Time spent on flushing to /var/log/journal/9a9cd92f423a4482982524d6d99c65cf is 111.987ms for 889 entries.
Mar 11 01:22:49.860232 systemd-journald[1290]: System Journal (/var/log/journal/9a9cd92f423a4482982524d6d99c65cf) is 11.8M, max 2.6G, 2.6G free.
Mar 11 01:22:49.992544 systemd-journald[1290]: Received client request to flush runtime journal.
Mar 11 01:22:49.992584 systemd-journald[1290]: /var/log/journal/9a9cd92f423a4482982524d6d99c65cf/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Mar 11 01:22:49.992658 systemd-journald[1290]: Rotating system journal.
Mar 11 01:22:49.992681 kernel: loop0: detected capacity change from 0 to 31320
Mar 11 01:22:49.855632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 11 01:22:49.871659 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 11 01:22:49.896721 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 11 01:22:49.917615 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 11 01:22:49.924961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 11 01:22:49.932544 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 11 01:22:49.943967 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 11 01:22:49.955515 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 11 01:22:49.962673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 11 01:22:49.972757 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 11 01:22:49.984815 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 11 01:22:49.991078 udevadm[1327]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 11 01:22:49.991387 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Mar 11 01:22:49.991397 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Mar 11 01:22:49.997058 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 11 01:22:50.003868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 11 01:22:50.019003 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 11 01:22:50.039229 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 11 01:22:50.042492 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 11 01:22:50.080648 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 11 01:22:50.081814 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 11 01:22:50.092736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 11 01:22:50.111181 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Mar 11 01:22:50.111194 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Mar 11 01:22:50.115225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 11 01:22:50.121956 kernel: loop1: detected capacity change from 0 to 200864
Mar 11 01:22:50.182506 kernel: loop2: detected capacity change from 0 to 114432
Mar 11 01:22:50.279627 kernel: loop3: detected capacity change from 0 to 114328
Mar 11 01:22:50.371507 kernel: loop4: detected capacity change from 0 to 31320
Mar 11 01:22:50.384867 kernel: loop5: detected capacity change from 0 to 200864
Mar 11 01:22:50.401507 kernel: loop6: detected capacity change from 0 to 114432
Mar 11 01:22:50.415481 kernel: loop7: detected capacity change from 0 to 114328
Mar 11 01:22:50.425028 (sd-merge)[1350]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 11 01:22:50.425430 (sd-merge)[1350]: Merged extensions into '/usr'.
Mar 11 01:22:50.430611 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 11 01:22:50.430731 systemd[1]: Reloading...
Mar 11 01:22:50.532180 zram_generator::config[1379]: No configuration found.
Mar 11 01:22:50.648754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 01:22:50.706783 systemd[1]: Reloading finished in 275 ms.
Mar 11 01:22:50.739075 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 11 01:22:50.745214 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 11 01:22:50.760670 systemd[1]: Starting ensure-sysext.service...
Mar 11 01:22:50.765415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 11 01:22:50.772630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 11 01:22:50.789026 systemd[1]: Reloading requested from client PID 1432 ('systemctl') (unit ensure-sysext.service)...
Mar 11 01:22:50.789037 systemd[1]: Reloading...
Mar 11 01:22:50.799824 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 11 01:22:50.800097 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 11 01:22:50.800762 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 11 01:22:50.800994 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Mar 11 01:22:50.801041 systemd-tmpfiles[1433]: ACLs are not supported, ignoring.
Mar 11 01:22:50.808205 systemd-tmpfiles[1433]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 01:22:50.808215 systemd-tmpfiles[1433]: Skipping /boot
Mar 11 01:22:50.821321 systemd-tmpfiles[1433]: Detected autofs mount point /boot during canonicalization of boot.
Mar 11 01:22:50.821710 systemd-tmpfiles[1433]: Skipping /boot
Mar 11 01:22:50.826018 systemd-udevd[1434]: Using default interface naming scheme 'v255'.
Mar 11 01:22:50.875479 zram_generator::config[1461]: No configuration found.
Mar 11 01:22:51.061073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 11 01:22:51.069469 kernel: mousedev: PS/2 mouse device common for all mice
Mar 11 01:22:51.069533 kernel: hv_vmbus: registering driver hv_balloon
Mar 11 01:22:51.069557 kernel: hv_vmbus: registering driver hyperv_fb
Mar 11 01:22:51.093472 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 11 01:22:51.093547 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#238 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 11 01:22:51.093734 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 11 01:22:51.143429 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 11 01:22:51.143928 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 11 01:22:51.159304 kernel: Console: switching to colour dummy device 80x25
Mar 11 01:22:51.167596 kernel: Console: switching to colour frame buffer device 128x48
Mar 11 01:22:51.167666 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1486)
Mar 11 01:22:51.173065 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 11 01:22:51.173440 systemd[1]: Reloading finished in 384 ms.
Mar 11 01:22:51.187956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 11 01:22:51.197920 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 11 01:22:51.228197 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Mar 11 01:22:51.229176 systemd[1]: Finished ensure-sysext.service. Mar 11 01:22:51.246694 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 01:22:51.261860 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 11 01:22:51.268044 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 11 01:22:51.268854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 11 01:22:51.278875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 11 01:22:51.288558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 11 01:22:51.295613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 11 01:22:51.302540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 11 01:22:51.307806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 11 01:22:51.310205 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 11 01:22:51.320899 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 11 01:22:51.333532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 11 01:22:51.341745 systemd[1]: Reached target time-set.target - System Time Set. Mar 11 01:22:51.352625 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 11 01:22:51.361681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:22:51.368445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 11 01:22:51.377170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 11 01:22:51.380600 augenrules[1620]: No rules Mar 11 01:22:51.377364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 11 01:22:51.383872 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 01:22:51.389347 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 11 01:22:51.389576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 11 01:22:51.395041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 11 01:22:51.395172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 11 01:22:51.401784 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 11 01:22:51.401908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 11 01:22:51.414069 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 11 01:22:51.434335 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 11 01:22:51.449349 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 11 01:22:51.454908 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 11 01:22:51.464650 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 11 01:22:51.471659 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 11 01:22:51.477220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 11 01:22:51.477416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 11 01:22:51.479678 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 11 01:22:51.487629 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 11 01:22:51.500849 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 11 01:22:51.507051 lvm[1636]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 11 01:22:51.510557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 11 01:22:51.510758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:22:51.516350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 11 01:22:51.529695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 11 01:22:51.545769 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 11 01:22:51.553086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 11 01:22:51.562692 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 11 01:22:51.569837 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 11 01:22:51.583713 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 11 01:22:51.611308 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 11 01:22:51.620244 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 11 01:22:51.630215 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 11 01:22:51.641994 systemd-networkd[1612]: lo: Link UP Mar 11 01:22:51.642251 systemd-networkd[1612]: lo: Gained carrier Mar 11 01:22:51.644332 systemd-networkd[1612]: Enumeration completed Mar 11 01:22:51.644556 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 11 01:22:51.650010 systemd-networkd[1612]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:22:51.650092 systemd-networkd[1612]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 01:22:51.650180 systemd-resolved[1614]: Positive Trust Anchors: Mar 11 01:22:51.650198 systemd-resolved[1614]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 11 01:22:51.650230 systemd-resolved[1614]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 11 01:22:51.655264 systemd-resolved[1614]: Using system hostname 'ci-4081.3.6-n-49f1e4db19'. Mar 11 01:22:51.656737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 11 01:22:51.663464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 11 01:22:51.710469 kernel: mlx5_core 9bdc:00:02.0 enP39900s1: Link up Mar 11 01:22:51.734628 kernel: hv_netvsc 7ced8dd2-a8e9-7ced-8dd2-a8e97ced8dd2 eth0: Data path switched to VF: enP39900s1 Mar 11 01:22:51.735194 systemd-networkd[1612]: enP39900s1: Link UP Mar 11 01:22:51.735287 systemd-networkd[1612]: eth0: Link UP Mar 11 01:22:51.735290 systemd-networkd[1612]: eth0: Gained carrier Mar 11 01:22:51.735304 systemd-networkd[1612]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 11 01:22:51.736012 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 11 01:22:51.740727 systemd[1]: Reached target network.target - Network. Mar 11 01:22:51.744495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 11 01:22:51.745730 systemd-networkd[1612]: enP39900s1: Gained carrier Mar 11 01:22:51.749370 systemd[1]: Reached target sysinit.target - System Initialization. Mar 11 01:22:51.754314 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 11 01:22:51.759449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 11 01:22:51.764868 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 11 01:22:51.769310 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 11 01:22:51.769494 systemd-networkd[1612]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 11 01:22:51.775109 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 11 01:22:51.780303 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 11 01:22:51.780335 systemd[1]: Reached target paths.target - Path Units. Mar 11 01:22:51.784080 systemd[1]: Reached target timers.target - Timer Units. Mar 11 01:22:51.788521 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 11 01:22:51.794393 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 11 01:22:51.804899 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 11 01:22:51.810250 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 11 01:22:51.815175 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 11 01:22:51.819125 systemd[1]: Reached target basic.target - Basic System. Mar 11 01:22:51.823393 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 11 01:22:51.823420 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 11 01:22:51.840523 systemd[1]: Starting chronyd.service - NTP client/server... Mar 11 01:22:51.848581 systemd[1]: Starting containerd.service - containerd container runtime... Mar 11 01:22:51.859659 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 11 01:22:51.865001 (chronyd)[1664]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 11 01:22:51.867633 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 11 01:22:51.875574 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 11 01:22:51.875990 chronyd[1672]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 11 01:22:51.882629 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 11 01:22:51.886503 chronyd[1672]: Timezone right/UTC failed leap second check, ignoring Mar 11 01:22:51.886683 chronyd[1672]: Loaded seccomp filter (level 2) Mar 11 01:22:51.887659 jq[1670]: false Mar 11 01:22:51.888715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 11 01:22:51.888752 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 11 01:22:51.892525 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Mar 11 01:22:51.897379 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 11 01:22:51.899074 KVP[1674]: KVP starting; pid is:1674 Mar 11 01:22:51.900626 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 11 01:22:51.908576 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 11 01:22:51.915234 extend-filesystems[1673]: Found loop4 Mar 11 01:22:51.922107 kernel: hv_utils: KVP IC version 4.0 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found loop5 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found loop6 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found loop7 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda1 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda2 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda3 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found usr Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda4 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda6 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda7 Mar 11 01:22:51.922134 extend-filesystems[1673]: Found sda9 Mar 11 01:22:51.922134 extend-filesystems[1673]: Checking size of /dev/sda9 Mar 11 01:22:51.920381 KVP[1674]: KVP LIC Version: 3.1 Mar 11 01:22:51.925624 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 11 01:22:51.995511 extend-filesystems[1673]: Old size kept for /dev/sda9 Mar 11 01:22:51.995511 extend-filesystems[1673]: Found sr0 Mar 11 01:22:51.952862 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 11 01:22:51.978857 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 11 01:22:51.986191 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 11 01:22:51.986699 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 11 01:22:51.987648 systemd[1]: Starting update-engine.service - Update Engine... Mar 11 01:22:52.035098 jq[1697]: true Mar 11 01:22:52.008570 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 11 01:22:52.016761 systemd[1]: Started chronyd.service - NTP client/server. Mar 11 01:22:52.025785 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 11 01:22:52.025956 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 11 01:22:52.026197 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 11 01:22:52.026324 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 11 01:22:52.038308 systemd[1]: motdgen.service: Deactivated successfully. Mar 11 01:22:52.039168 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 11 01:22:52.049056 dbus-daemon[1667]: [system] SELinux support is enabled Mar 11 01:22:52.054505 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 11 01:22:52.066116 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 11 01:22:52.066284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 11 01:22:52.105583 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1501) Mar 11 01:22:52.110450 update_engine[1695]: I20260311 01:22:52.110383 1695 main.cc:92] Flatcar Update Engine starting Mar 11 01:22:52.120791 update_engine[1695]: I20260311 01:22:52.115783 1695 update_check_scheduler.cc:74] Next update check in 9m35s Mar 11 01:22:52.120604 (ntainerd)[1717]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 11 01:22:52.126100 systemd-logind[1690]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 11 01:22:52.126818 systemd-logind[1690]: New seat seat0. Mar 11 01:22:52.137050 jq[1706]: true Mar 11 01:22:52.132398 systemd[1]: Started systemd-logind.service - User Login Management. Mar 11 01:22:52.146885 dbus-daemon[1667]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 11 01:22:52.148006 systemd[1]: Started update-engine.service - Update Engine. Mar 11 01:22:52.156664 coreos-metadata[1666]: Mar 11 01:22:52.156 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 11 01:22:52.158418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 11 01:22:52.158623 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 11 01:22:52.167862 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 11 01:22:52.187178 coreos-metadata[1666]: Mar 11 01:22:52.177 INFO Fetch successful Mar 11 01:22:52.187178 coreos-metadata[1666]: Mar 11 01:22:52.177 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 11 01:22:52.167970 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 11 01:22:52.182689 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 11 01:22:52.190243 coreos-metadata[1666]: Mar 11 01:22:52.188 INFO Fetch successful Mar 11 01:22:52.190243 coreos-metadata[1666]: Mar 11 01:22:52.190 INFO Fetching http://168.63.129.16/machine/f0ff9e28-09c3-4c63-9150-b229098ba4b6/98a873f6%2Da051%2D4e08%2Db919%2D5952852f010f.%5Fci%2D4081.3.6%2Dn%2D49f1e4db19?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 11 01:22:52.193085 coreos-metadata[1666]: Mar 11 01:22:52.190 INFO Fetch successful Mar 11 01:22:52.201168 coreos-metadata[1666]: Mar 11 01:22:52.200 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 11 01:22:52.201168 coreos-metadata[1666]: Mar 11 01:22:52.200 INFO Fetch successful Mar 11 01:22:52.220902 tar[1702]: linux-arm64/LICENSE Mar 11 01:22:52.220902 tar[1702]: linux-arm64/helm Mar 11 01:22:52.261766 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 11 01:22:52.271176 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 11 01:22:52.286918 bash[1763]: Updated "/home/core/.ssh/authorized_keys" Mar 11 01:22:52.289836 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 11 01:22:52.301317 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 11 01:22:52.424583 containerd[1717]: time="2026-03-11T01:22:52.423556120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 11 01:22:52.475800 containerd[1717]: time="2026-03-11T01:22:52.475755880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480558880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480594240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480611520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480767920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480784920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480846840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.480857920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.481016920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.481031920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.481045520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481496 containerd[1717]: time="2026-03-11T01:22:52.481054920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481744 containerd[1717]: time="2026-03-11T01:22:52.481121520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481744 containerd[1717]: time="2026-03-11T01:22:52.481309000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481744 containerd[1717]: time="2026-03-11T01:22:52.481400800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 11 01:22:52.481744 containerd[1717]: time="2026-03-11T01:22:52.481414280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 11 01:22:52.482112 containerd[1717]: time="2026-03-11T01:22:52.481886800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 11 01:22:52.482112 containerd[1717]: time="2026-03-11T01:22:52.481943480Z" level=info msg="metadata content store policy set" policy=shared Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492255360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492307200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492322040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492341440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492386720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492526480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492738440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492831320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492847440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492859520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492872600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492888680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492900720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493361 containerd[1717]: time="2026-03-11T01:22:52.492913560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.492926880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.492939440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.492952720Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.492965480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.492984160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493003880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493015800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493043520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493055240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493068840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493080360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493093240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493105560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493668 containerd[1717]: time="2026-03-11T01:22:52.493119680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493136360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493149360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493161720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493177080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493200840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493213080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.493898 containerd[1717]: time="2026-03-11T01:22:52.493223600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 11 01:22:52.494472 containerd[1717]: time="2026-03-11T01:22:52.494438000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 11 01:22:52.494556 containerd[1717]: time="2026-03-11T01:22:52.494541560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 11 01:22:52.494604 containerd[1717]: time="2026-03-11T01:22:52.494592760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 11 01:22:52.494673 containerd[1717]: time="2026-03-11T01:22:52.494656120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 11 01:22:52.496478 containerd[1717]: time="2026-03-11T01:22:52.495477440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 11 01:22:52.496478 containerd[1717]: time="2026-03-11T01:22:52.495497480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 11 01:22:52.496478 containerd[1717]: time="2026-03-11T01:22:52.495509720Z" level=info msg="NRI interface is disabled by configuration." Mar 11 01:22:52.496478 containerd[1717]: time="2026-03-11T01:22:52.495525320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 11 01:22:52.496610 containerd[1717]: time="2026-03-11T01:22:52.495817640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 11 01:22:52.496610 containerd[1717]: time="2026-03-11T01:22:52.495877360Z" level=info msg="Connect containerd service" Mar 11 01:22:52.496610 containerd[1717]: time="2026-03-11T01:22:52.495908840Z" level=info msg="using legacy CRI server" Mar 11 01:22:52.496610 containerd[1717]: time="2026-03-11T01:22:52.495915520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 11 01:22:52.496610 containerd[1717]: time="2026-03-11T01:22:52.496078240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 11 01:22:52.497511 containerd[1717]: time="2026-03-11T01:22:52.496943880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 11 01:22:52.497570 containerd[1717]: time="2026-03-11T01:22:52.497548360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 11 01:22:52.497614 containerd[1717]: time="2026-03-11T01:22:52.497600880Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 11 01:22:52.497657 containerd[1717]: time="2026-03-11T01:22:52.497630920Z" level=info msg="Start subscribing containerd event" Mar 11 01:22:52.497686 containerd[1717]: time="2026-03-11T01:22:52.497675840Z" level=info msg="Start recovering state" Mar 11 01:22:52.497756 containerd[1717]: time="2026-03-11T01:22:52.497740720Z" level=info msg="Start event monitor" Mar 11 01:22:52.497784 containerd[1717]: time="2026-03-11T01:22:52.497754000Z" level=info msg="Start snapshots syncer" Mar 11 01:22:52.497784 containerd[1717]: time="2026-03-11T01:22:52.497768040Z" level=info msg="Start cni network conf syncer for default" Mar 11 01:22:52.497784 containerd[1717]: time="2026-03-11T01:22:52.497775840Z" level=info msg="Start streaming server" Mar 11 01:22:52.502574 containerd[1717]: time="2026-03-11T01:22:52.498005880Z" level=info msg="containerd successfully booted in 0.075257s" Mar 11 01:22:52.498090 systemd[1]: Started containerd.service - containerd container runtime. Mar 11 01:22:52.559489 locksmithd[1732]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 11 01:22:52.730852 tar[1702]: linux-arm64/README.md Mar 11 01:22:52.741283 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 11 01:22:52.742751 sshd_keygen[1693]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 11 01:22:52.762922 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 11 01:22:52.772643 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 11 01:22:52.776631 systemd-networkd[1612]: eth0: Gained IPv6LL Mar 11 01:22:52.780592 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 11 01:22:52.780839 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 11 01:22:52.786189 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 11 01:22:52.793094 systemd[1]: Reached target network-online.target - Network is Online. Mar 11 01:22:52.802615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:22:52.808812 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 11 01:22:52.823292 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 11 01:22:52.831541 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 11 01:22:52.844365 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 11 01:22:52.859732 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 11 01:22:52.872704 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 11 01:22:52.878779 systemd[1]: Reached target getty.target - Login Prompts. Mar 11 01:22:52.883697 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 11 01:22:52.893322 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
Mar 11 01:22:53.450774 waagent[1808]: 2026-03-11T01:22:53.450680Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 11 01:22:53.455769 waagent[1808]: 2026-03-11T01:22:53.455708Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 11 01:22:53.461470 waagent[1808]: 2026-03-11T01:22:53.460054Z INFO Daemon Daemon Python: 3.11.9 Mar 11 01:22:53.463695 waagent[1808]: 2026-03-11T01:22:53.463625Z INFO Daemon Daemon Run daemon Mar 11 01:22:53.466804 waagent[1808]: 2026-03-11T01:22:53.466760Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 11 01:22:53.475565 waagent[1808]: 2026-03-11T01:22:53.474914Z INFO Daemon Daemon Using waagent for provisioning Mar 11 01:22:53.479194 waagent[1808]: 2026-03-11T01:22:53.479152Z INFO Daemon Daemon Activate resource disk Mar 11 01:22:53.485934 waagent[1808]: 2026-03-11T01:22:53.484192Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 11 01:22:53.493155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:22:53.499162 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:22:53.500389 waagent[1808]: 2026-03-11T01:22:53.500337Z INFO Daemon Daemon Found device: None Mar 11 01:22:53.502286 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 11 01:22:53.504227 waagent[1808]: 2026-03-11T01:22:53.504180Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 11 01:22:53.510776 waagent[1808]: 2026-03-11T01:22:53.510730Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 11 01:22:53.524234 systemd[1]: Startup finished in 604ms (kernel) + 8.354s (initrd) + 5.579s (userspace) = 14.538s. 
Mar 11 01:22:53.527296 waagent[1808]: 2026-03-11T01:22:53.527242Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 11 01:22:53.532488 waagent[1808]: 2026-03-11T01:22:53.532166Z INFO Daemon Daemon Running default provisioning handler Mar 11 01:22:53.549176 waagent[1808]: 2026-03-11T01:22:53.549115Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 11 01:22:53.561234 waagent[1808]: 2026-03-11T01:22:53.561179Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 11 01:22:53.570101 waagent[1808]: 2026-03-11T01:22:53.570049Z INFO Daemon Daemon cloud-init is enabled: False Mar 11 01:22:53.574438 waagent[1808]: 2026-03-11T01:22:53.574388Z INFO Daemon Daemon Copying ovf-env.xml Mar 11 01:22:53.630778 waagent[1808]: 2026-03-11T01:22:53.630707Z INFO Daemon Daemon Successfully mounted dvd Mar 11 01:22:53.650565 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 11 01:22:53.653670 login[1802]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:22:53.654754 waagent[1808]: 2026-03-11T01:22:53.651913Z INFO Daemon Daemon Detect protocol endpoint Mar 11 01:22:53.658726 login[1804]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:22:53.661769 waagent[1808]: 2026-03-11T01:22:53.661720Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 11 01:22:53.666311 waagent[1808]: 2026-03-11T01:22:53.666271Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 11 01:22:53.666996 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 11 01:22:53.671969 waagent[1808]: 2026-03-11T01:22:53.671922Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 11 01:22:53.676276 waagent[1808]: 2026-03-11T01:22:53.676233Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 11 01:22:53.680844 waagent[1808]: 2026-03-11T01:22:53.680803Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 11 01:22:53.685785 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 11 01:22:53.688563 systemd-logind[1690]: New session 1 of user core. Mar 11 01:22:53.695423 systemd-logind[1690]: New session 2 of user core. Mar 11 01:22:53.713801 waagent[1808]: 2026-03-11T01:22:53.703610Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 11 01:22:53.705256 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 11 01:22:53.711824 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 11 01:22:53.716069 waagent[1808]: 2026-03-11T01:22:53.715971Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 11 01:22:53.721014 waagent[1808]: 2026-03-11T01:22:53.720961Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 11 01:22:53.721765 (systemd)[1836]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 11 01:22:53.881587 systemd[1836]: Queued start job for default target default.target. Mar 11 01:22:53.887633 systemd[1836]: Created slice app.slice - User Application Slice. Mar 11 01:22:53.887660 systemd[1836]: Reached target paths.target - Paths. Mar 11 01:22:53.887672 systemd[1836]: Reached target timers.target - Timers. Mar 11 01:22:53.889270 systemd[1836]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 11 01:22:53.911595 systemd[1836]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 11 01:22:53.911696 systemd[1836]: Reached target sockets.target - Sockets. Mar 11 01:22:53.911708 systemd[1836]: Reached target basic.target - Basic System. 
Mar 11 01:22:53.911824 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 11 01:22:53.913050 systemd[1836]: Reached target default.target - Main User Target. Mar 11 01:22:53.913103 systemd[1836]: Startup finished in 180ms. Mar 11 01:22:53.916773 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 11 01:22:53.918333 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 11 01:22:53.991713 waagent[1808]: 2026-03-11T01:22:53.991585Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 11 01:22:53.999765 waagent[1808]: 2026-03-11T01:22:53.998911Z INFO Daemon Daemon Forcing an update of the goal state. Mar 11 01:22:54.010972 waagent[1808]: 2026-03-11T01:22:54.010919Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 11 01:22:54.028174 waagent[1808]: 2026-03-11T01:22:54.028120Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 11 01:22:54.032770 waagent[1808]: 2026-03-11T01:22:54.032668Z INFO Daemon Mar 11 01:22:54.035162 waagent[1808]: 2026-03-11T01:22:54.035043Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7fc238e8-8f0d-4a9d-a24c-3f87faff5a54 eTag: 6251504218800447513 source: Fabric] Mar 11 01:22:54.043973 waagent[1808]: 2026-03-11T01:22:54.043800Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 11 01:22:54.049253 waagent[1808]: 2026-03-11T01:22:54.049207Z INFO Daemon Mar 11 01:22:54.051464 waagent[1808]: 2026-03-11T01:22:54.051397Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 11 01:22:54.060156 waagent[1808]: 2026-03-11T01:22:54.059663Z INFO Daemon Daemon Downloading artifacts profile blob Mar 11 01:22:54.078443 kubelet[1818]: E0311 01:22:54.078398 1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:22:54.082167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:22:54.082310 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:22:54.133472 waagent[1808]: 2026-03-11T01:22:54.131418Z INFO Daemon Downloaded certificate {'thumbprint': '5C51745DD80715651C47BAC9078223A9D16F6B71', 'hasPrivateKey': True} Mar 11 01:22:54.139065 waagent[1808]: 2026-03-11T01:22:54.139022Z INFO Daemon Fetch goal state completed Mar 11 01:22:54.149054 waagent[1808]: 2026-03-11T01:22:54.149019Z INFO Daemon Daemon Starting provisioning Mar 11 01:22:54.152917 waagent[1808]: 2026-03-11T01:22:54.152878Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 11 01:22:54.156805 waagent[1808]: 2026-03-11T01:22:54.156774Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-49f1e4db19] Mar 11 01:22:54.163079 waagent[1808]: 2026-03-11T01:22:54.163032Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-49f1e4db19] Mar 11 01:22:54.168290 waagent[1808]: 2026-03-11T01:22:54.168249Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 11 01:22:54.173404 waagent[1808]: 2026-03-11T01:22:54.173366Z INFO Daemon Daemon Primary interface is [eth0] Mar 11 01:22:54.188515 systemd-networkd[1612]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 11 01:22:54.188521 systemd-networkd[1612]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 11 01:22:54.188550 systemd-networkd[1612]: eth0: DHCP lease lost Mar 11 01:22:54.189720 waagent[1808]: 2026-03-11T01:22:54.189663Z INFO Daemon Daemon Create user account if not exists Mar 11 01:22:54.194103 waagent[1808]: 2026-03-11T01:22:54.194062Z INFO Daemon Daemon User core already exists, skip useradd Mar 11 01:22:54.198395 waagent[1808]: 2026-03-11T01:22:54.198360Z INFO Daemon Daemon Configure sudoer Mar 11 01:22:54.201846 waagent[1808]: 2026-03-11T01:22:54.201804Z INFO Daemon Daemon Configure sshd Mar 11 01:22:54.205272 waagent[1808]: 2026-03-11T01:22:54.205228Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 11 01:22:54.214538 waagent[1808]: 2026-03-11T01:22:54.214505Z INFO Daemon Daemon Deploy ssh public key. 
Mar 11 01:22:54.220499 systemd-networkd[1612]: eth0: DHCPv6 lease lost Mar 11 01:22:54.234508 systemd-networkd[1612]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 11 01:22:55.288886 waagent[1808]: 2026-03-11T01:22:55.285441Z INFO Daemon Daemon Provisioning complete Mar 11 01:22:55.299867 waagent[1808]: 2026-03-11T01:22:55.299825Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 11 01:22:55.304534 waagent[1808]: 2026-03-11T01:22:55.304495Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 11 01:22:55.311530 waagent[1808]: 2026-03-11T01:22:55.311496Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 11 01:22:55.434485 waagent[1880]: 2026-03-11T01:22:55.434256Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 11 01:22:55.434485 waagent[1880]: 2026-03-11T01:22:55.434386Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 11 01:22:55.434485 waagent[1880]: 2026-03-11T01:22:55.434439Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 11 01:22:55.450447 waagent[1880]: 2026-03-11T01:22:55.450382Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 11 01:22:55.450607 waagent[1880]: 2026-03-11T01:22:55.450571Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 11 01:22:55.450664 waagent[1880]: 2026-03-11T01:22:55.450637Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 11 01:22:55.457553 waagent[1880]: 2026-03-11T01:22:55.457500Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 11 01:22:55.462125 waagent[1880]: 2026-03-11T01:22:55.462088Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 11 01:22:55.462564 waagent[1880]: 2026-03-11T01:22:55.462526Z 
INFO ExtHandler Mar 11 01:22:55.462633 waagent[1880]: 2026-03-11T01:22:55.462606Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b4d26338-2c20-4a0c-83be-1625c9cc02b9 eTag: 6251504218800447513 source: Fabric] Mar 11 01:22:55.462912 waagent[1880]: 2026-03-11T01:22:55.462876Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 11 01:22:55.463462 waagent[1880]: 2026-03-11T01:22:55.463418Z INFO ExtHandler Mar 11 01:22:55.463539 waagent[1880]: 2026-03-11T01:22:55.463510Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 11 01:22:55.466442 waagent[1880]: 2026-03-11T01:22:55.466411Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 11 01:22:55.534136 waagent[1880]: 2026-03-11T01:22:55.533169Z INFO ExtHandler Downloaded certificate {'thumbprint': '5C51745DD80715651C47BAC9078223A9D16F6B71', 'hasPrivateKey': True} Mar 11 01:22:55.534136 waagent[1880]: 2026-03-11T01:22:55.533689Z INFO ExtHandler Fetch goal state completed Mar 11 01:22:55.548101 waagent[1880]: 2026-03-11T01:22:55.548015Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1880 Mar 11 01:22:55.548326 waagent[1880]: 2026-03-11T01:22:55.548292Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 11 01:22:55.549927 waagent[1880]: 2026-03-11T01:22:55.549889Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 11 01:22:55.550406 waagent[1880]: 2026-03-11T01:22:55.550370Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 11 01:22:55.563161 waagent[1880]: 2026-03-11T01:22:55.563132Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 11 01:22:55.563390 waagent[1880]: 2026-03-11T01:22:55.563355Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Mar 11 01:22:55.569398 waagent[1880]: 2026-03-11T01:22:55.569366Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 11 01:22:55.575171 systemd[1]: Reloading requested from client PID 1893 ('systemctl') (unit waagent.service)... Mar 11 01:22:55.575182 systemd[1]: Reloading... Mar 11 01:22:55.649520 zram_generator::config[1927]: No configuration found. Mar 11 01:22:55.748892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:22:55.831235 systemd[1]: Reloading finished in 255 ms. Mar 11 01:22:55.856748 waagent[1880]: 2026-03-11T01:22:55.856673Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 11 01:22:55.863076 systemd[1]: Reloading requested from client PID 1981 ('systemctl') (unit waagent.service)... Mar 11 01:22:55.863091 systemd[1]: Reloading... Mar 11 01:22:55.935668 zram_generator::config[2015]: No configuration found. Mar 11 01:22:56.037343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:22:56.115098 systemd[1]: Reloading finished in 251 ms. Mar 11 01:22:56.138005 waagent[1880]: 2026-03-11T01:22:56.136558Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 11 01:22:56.138005 waagent[1880]: 2026-03-11T01:22:56.136708Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 11 01:22:56.260901 waagent[1880]: 2026-03-11T01:22:56.260820Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 11 01:22:56.261439 waagent[1880]: 2026-03-11T01:22:56.261396Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 11 01:22:56.262189 waagent[1880]: 2026-03-11T01:22:56.262118Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 11 01:22:56.262537 waagent[1880]: 2026-03-11T01:22:56.262449Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 11 01:22:56.262991 waagent[1880]: 2026-03-11T01:22:56.262892Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 11 01:22:56.263055 waagent[1880]: 2026-03-11T01:22:56.262982Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 11 01:22:56.263289 waagent[1880]: 2026-03-11T01:22:56.263189Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 11 01:22:56.264164 waagent[1880]: 2026-03-11T01:22:56.263438Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 11 01:22:56.264164 waagent[1880]: 2026-03-11T01:22:56.263558Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 11 01:22:56.264164 waagent[1880]: 2026-03-11T01:22:56.263757Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 11 01:22:56.264164 waagent[1880]: 2026-03-11T01:22:56.263918Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 11 01:22:56.264164 waagent[1880]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 11 01:22:56.264164 waagent[1880]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 11 01:22:56.264164 waagent[1880]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 11 01:22:56.264164 waagent[1880]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 11 01:22:56.264164 waagent[1880]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 11 01:22:56.264164 waagent[1880]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 11 01:22:56.264574 waagent[1880]: 2026-03-11T01:22:56.264499Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 11 01:22:56.264621 waagent[1880]: 2026-03-11T01:22:56.264575Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 11 01:22:56.265278 waagent[1880]: 2026-03-11T01:22:56.265248Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 11 01:22:56.265347 waagent[1880]: 2026-03-11T01:22:56.265160Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 11 01:22:56.265593 waagent[1880]: 2026-03-11T01:22:56.265552Z INFO EnvHandler ExtHandler Configure routes Mar 11 01:22:56.266219 waagent[1880]: 2026-03-11T01:22:56.266189Z INFO EnvHandler ExtHandler Gateway:None Mar 11 01:22:56.266408 waagent[1880]: 2026-03-11T01:22:56.266383Z INFO EnvHandler ExtHandler Routes:None Mar 11 01:22:56.270290 waagent[1880]: 2026-03-11T01:22:56.270247Z INFO ExtHandler ExtHandler Mar 11 01:22:56.272861 waagent[1880]: 2026-03-11T01:22:56.272734Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 03ed260c-6817-4bd3-8e7d-577c274a0226 correlation 944ddbf4-4e0e-421f-a030-5fa8b498e113 created: 2026-03-11T01:22:22.258392Z] Mar 11 01:22:56.273306 waagent[1880]: 2026-03-11T01:22:56.273268Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 11 01:22:56.275709 waagent[1880]: 2026-03-11T01:22:56.275656Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 5 ms] Mar 11 01:22:56.290278 waagent[1880]: 2026-03-11T01:22:56.290226Z INFO MonitorHandler ExtHandler Network interfaces: Mar 11 01:22:56.290278 waagent[1880]: Executing ['ip', '-a', '-o', 'link']: Mar 11 01:22:56.290278 waagent[1880]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 11 01:22:56.290278 waagent[1880]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d2:a8:e9 brd ff:ff:ff:ff:ff:ff Mar 11 01:22:56.290278 waagent[1880]: 3: enP39900s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d2:a8:e9 brd ff:ff:ff:ff:ff:ff\ altname enP39900p0s2 Mar 11 01:22:56.290278 waagent[1880]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 11 01:22:56.290278 waagent[1880]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 11 01:22:56.290278 waagent[1880]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 11 01:22:56.290278 waagent[1880]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 11 01:22:56.290278 waagent[1880]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 11 01:22:56.290278 waagent[1880]: 2: eth0 inet6 fe80::7eed:8dff:fed2:a8e9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 11 01:22:56.310385 waagent[1880]: 2026-03-11T01:22:56.310331Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: C20247B4-791E-4DEC-A16D-349C76006724;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 11 01:22:56.319076 waagent[1880]: 2026-03-11T01:22:56.319031Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 11 01:22:56.319076 waagent[1880]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 11 01:22:56.319076 waagent[1880]: pkts bytes target prot opt in out source destination Mar 11 01:22:56.319076 waagent[1880]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 11 01:22:56.319076 waagent[1880]: pkts bytes target prot opt in out source destination Mar 11 01:22:56.319076 waagent[1880]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 11 01:22:56.319076 waagent[1880]: pkts bytes target prot opt in out source destination Mar 11 01:22:56.319076 waagent[1880]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 11 01:22:56.319076 waagent[1880]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 11 01:22:56.319076 waagent[1880]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 11 01:22:56.321974 waagent[1880]: 2026-03-11T01:22:56.321932Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 11 01:22:56.321974 waagent[1880]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 11 01:22:56.321974 waagent[1880]: pkts bytes target prot opt in out source destination Mar 11 01:22:56.321974 waagent[1880]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 11 01:22:56.321974 waagent[1880]: pkts bytes target prot opt in out source destination Mar 11 01:22:56.321974 waagent[1880]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 11 01:22:56.321974 waagent[1880]: pkts bytes target prot opt in out source destination Mar 11 01:22:56.321974 waagent[1880]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 11 01:22:56.321974 waagent[1880]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 11 01:22:56.321974 waagent[1880]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 11 01:22:56.322523 waagent[1880]: 2026-03-11T01:22:56.322396Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 11 
01:23:04.332958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 11 01:23:04.339604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:04.435109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:04.438570 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:23:04.580594 kubelet[2108]: E0311 01:23:04.580547 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:23:04.583919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:23:04.584159 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:23:08.208502 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 11 01:23:08.209782 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:50110.service - OpenSSH per-connection server daemon (10.200.16.10:50110). Mar 11 01:23:08.702758 sshd[2116]: Accepted publickey for core from 10.200.16.10 port 50110 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:08.703949 sshd[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:08.707373 systemd-logind[1690]: New session 3 of user core. Mar 11 01:23:08.712605 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 11 01:23:09.125570 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:50122.service - OpenSSH per-connection server daemon (10.200.16.10:50122). 
Mar 11 01:23:09.585715 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 50122 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:09.586637 sshd[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:09.590013 systemd-logind[1690]: New session 4 of user core. Mar 11 01:23:09.597575 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 11 01:23:09.919429 sshd[2121]: pam_unix(sshd:session): session closed for user core Mar 11 01:23:09.923242 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:50122.service: Deactivated successfully. Mar 11 01:23:09.924805 systemd[1]: session-4.scope: Deactivated successfully. Mar 11 01:23:09.925484 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Mar 11 01:23:09.926367 systemd-logind[1690]: Removed session 4. Mar 11 01:23:10.008727 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:37028.service - OpenSSH per-connection server daemon (10.200.16.10:37028). Mar 11 01:23:10.501064 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 37028 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:10.501854 sshd[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:10.506211 systemd-logind[1690]: New session 5 of user core. Mar 11 01:23:10.511585 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 11 01:23:10.852056 sshd[2128]: pam_unix(sshd:session): session closed for user core Mar 11 01:23:10.855323 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:37028.service: Deactivated successfully. Mar 11 01:23:10.856796 systemd[1]: session-5.scope: Deactivated successfully. Mar 11 01:23:10.857387 systemd-logind[1690]: Session 5 logged out. Waiting for processes to exit. Mar 11 01:23:10.858161 systemd-logind[1690]: Removed session 5. 
Mar 11 01:23:10.918610 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:37042.service - OpenSSH per-connection server daemon (10.200.16.10:37042). Mar 11 01:23:11.326968 sshd[2135]: Accepted publickey for core from 10.200.16.10 port 37042 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:11.328501 sshd[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:11.332042 systemd-logind[1690]: New session 6 of user core. Mar 11 01:23:11.342571 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 11 01:23:11.631384 sshd[2135]: pam_unix(sshd:session): session closed for user core Mar 11 01:23:11.635101 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:37042.service: Deactivated successfully. Mar 11 01:23:11.636862 systemd[1]: session-6.scope: Deactivated successfully. Mar 11 01:23:11.638886 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit. Mar 11 01:23:11.639802 systemd-logind[1690]: Removed session 6. Mar 11 01:23:11.716628 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:37050.service - OpenSSH per-connection server daemon (10.200.16.10:37050). Mar 11 01:23:12.174578 sshd[2142]: Accepted publickey for core from 10.200.16.10 port 37050 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:12.175302 sshd[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:12.178736 systemd-logind[1690]: New session 7 of user core. Mar 11 01:23:12.194564 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 11 01:23:12.469561 sudo[2145]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 11 01:23:12.469827 sudo[2145]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:23:12.484095 sudo[2145]: pam_unix(sudo:session): session closed for user root Mar 11 01:23:12.555779 sshd[2142]: pam_unix(sshd:session): session closed for user core Mar 11 01:23:12.560163 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:37050.service: Deactivated successfully. Mar 11 01:23:12.561835 systemd[1]: session-7.scope: Deactivated successfully. Mar 11 01:23:12.562621 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit. Mar 11 01:23:12.563604 systemd-logind[1690]: Removed session 7. Mar 11 01:23:12.641287 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:37066.service - OpenSSH per-connection server daemon (10.200.16.10:37066). Mar 11 01:23:13.086680 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 37066 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:13.087515 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:13.091326 systemd-logind[1690]: New session 8 of user core. Mar 11 01:23:13.098582 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 11 01:23:13.339417 sudo[2154]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 11 01:23:13.339754 sudo[2154]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:23:13.343263 sudo[2154]: pam_unix(sudo:session): session closed for user root Mar 11 01:23:13.347716 sudo[2153]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 11 01:23:13.347968 sudo[2153]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:23:13.359857 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 11 01:23:13.360820 auditctl[2157]: No rules Mar 11 01:23:13.361235 systemd[1]: audit-rules.service: Deactivated successfully. Mar 11 01:23:13.361380 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 11 01:23:13.363953 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 11 01:23:13.384231 augenrules[2175]: No rules Mar 11 01:23:13.385449 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 11 01:23:13.386413 sudo[2153]: pam_unix(sudo:session): session closed for user root Mar 11 01:23:13.462325 sshd[2150]: pam_unix(sshd:session): session closed for user core Mar 11 01:23:13.465607 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:37066.service: Deactivated successfully. Mar 11 01:23:13.467026 systemd[1]: session-8.scope: Deactivated successfully. Mar 11 01:23:13.467668 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit. Mar 11 01:23:13.468369 systemd-logind[1690]: Removed session 8. Mar 11 01:23:13.548849 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:37074.service - OpenSSH per-connection server daemon (10.200.16.10:37074). 
Mar 11 01:23:14.033094 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 37074 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:23:14.034365 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:23:14.038007 systemd-logind[1690]: New session 9 of user core. Mar 11 01:23:14.046575 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 11 01:23:14.306341 sudo[2186]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 11 01:23:14.306628 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 11 01:23:14.670760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 11 01:23:14.677603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:14.771430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:14.775449 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:23:14.877442 kubelet[2202]: E0311 01:23:14.877344 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:23:14.879625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:23:14.879766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:23:15.673766 chronyd[1672]: Selected source PHC0 Mar 11 01:23:16.175664 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 11 01:23:16.175764 (dockerd)[2215]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 11 01:23:16.946472 dockerd[2215]: time="2026-03-11T01:23:16.945809963Z" level=info msg="Starting up" Mar 11 01:23:17.493121 dockerd[2215]: time="2026-03-11T01:23:17.493084364Z" level=info msg="Loading containers: start." Mar 11 01:23:17.701478 kernel: Initializing XFRM netlink socket Mar 11 01:23:17.986195 systemd-networkd[1612]: docker0: Link UP Mar 11 01:23:18.004497 dockerd[2215]: time="2026-03-11T01:23:18.004438392Z" level=info msg="Loading containers: done." Mar 11 01:23:18.014299 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3502784377-merged.mount: Deactivated successfully. Mar 11 01:23:18.021782 dockerd[2215]: time="2026-03-11T01:23:18.021736820Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 11 01:23:18.021875 dockerd[2215]: time="2026-03-11T01:23:18.021854540Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 11 01:23:18.021989 dockerd[2215]: time="2026-03-11T01:23:18.021973100Z" level=info msg="Daemon has completed initialization" Mar 11 01:23:18.071983 dockerd[2215]: time="2026-03-11T01:23:18.071567501Z" level=info msg="API listen on /run/docker.sock" Mar 11 01:23:18.072304 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 11 01:23:18.539281 containerd[1717]: time="2026-03-11T01:23:18.539245579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 11 01:23:19.567815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80319480.mount: Deactivated successfully. 
Mar 11 01:23:21.450585 containerd[1717]: time="2026-03-11T01:23:21.449533260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:21.452533 containerd[1717]: time="2026-03-11T01:23:21.452500745Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252" Mar 11 01:23:21.455589 containerd[1717]: time="2026-03-11T01:23:21.455552190Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:21.460383 containerd[1717]: time="2026-03-11T01:23:21.459958997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:21.461782 containerd[1717]: time="2026-03-11T01:23:21.460982399Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.92169946s" Mar 11 01:23:21.461782 containerd[1717]: time="2026-03-11T01:23:21.461018479Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\"" Mar 11 01:23:21.461782 containerd[1717]: time="2026-03-11T01:23:21.461525560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 11 01:23:23.048240 containerd[1717]: time="2026-03-11T01:23:23.048192614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:23.055477 containerd[1717]: time="2026-03-11T01:23:23.055445985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641" Mar 11 01:23:23.058961 containerd[1717]: time="2026-03-11T01:23:23.058598751Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:23.063814 containerd[1717]: time="2026-03-11T01:23:23.063787359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:23.064844 containerd[1717]: time="2026-03-11T01:23:23.064818281Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.603266561s" Mar 11 01:23:23.064938 containerd[1717]: time="2026-03-11T01:23:23.064922241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\"" Mar 11 01:23:23.065406 containerd[1717]: time="2026-03-11T01:23:23.065384802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 11 01:23:24.288499 containerd[1717]: time="2026-03-11T01:23:24.288443624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:24.294359 containerd[1717]: time="2026-03-11T01:23:24.294128153Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544" Mar 11 01:23:24.300469 containerd[1717]: time="2026-03-11T01:23:24.298759960Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:24.306402 containerd[1717]: time="2026-03-11T01:23:24.306373733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:24.307553 containerd[1717]: time="2026-03-11T01:23:24.307522175Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.241617212s" Mar 11 01:23:24.307656 containerd[1717]: time="2026-03-11T01:23:24.307632735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\"" Mar 11 01:23:24.308286 containerd[1717]: time="2026-03-11T01:23:24.308248976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 11 01:23:24.920735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 11 01:23:24.928591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:25.036742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 11 01:23:25.047684 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 11 01:23:25.165611 kubelet[2428]: E0311 01:23:25.165552 2428 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 11 01:23:25.168227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 11 01:23:25.168380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 11 01:23:25.831662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047290834.mount: Deactivated successfully. Mar 11 01:23:26.069875 containerd[1717]: time="2026-03-11T01:23:26.069820748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:26.072597 containerd[1717]: time="2026-03-11T01:23:26.072570472Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088" Mar 11 01:23:26.075337 containerd[1717]: time="2026-03-11T01:23:26.075306997Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:26.078915 containerd[1717]: time="2026-03-11T01:23:26.078872122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:26.079547 containerd[1717]: time="2026-03-11T01:23:26.079421283Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id 
\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.771142907s" Mar 11 01:23:26.079547 containerd[1717]: time="2026-03-11T01:23:26.079449523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\"" Mar 11 01:23:26.080275 containerd[1717]: time="2026-03-11T01:23:26.080252525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 11 01:23:26.736746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121900601.mount: Deactivated successfully. Mar 11 01:23:27.916166 containerd[1717]: time="2026-03-11T01:23:27.916118057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:27.922221 containerd[1717]: time="2026-03-11T01:23:27.922124507Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Mar 11 01:23:27.923016 containerd[1717]: time="2026-03-11T01:23:27.922984108Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:27.927644 containerd[1717]: time="2026-03-11T01:23:27.927250235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:27.928472 containerd[1717]: time="2026-03-11T01:23:27.928432757Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.848149672s" Mar 11 01:23:27.928562 containerd[1717]: time="2026-03-11T01:23:27.928546797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Mar 11 01:23:27.928992 containerd[1717]: time="2026-03-11T01:23:27.928970798Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 11 01:23:28.493048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125340085.mount: Deactivated successfully. Mar 11 01:23:28.513747 containerd[1717]: time="2026-03-11T01:23:28.513703704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:28.516071 containerd[1717]: time="2026-03-11T01:23:28.515902588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 11 01:23:28.518616 containerd[1717]: time="2026-03-11T01:23:28.518492872Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:28.523909 containerd[1717]: time="2026-03-11T01:23:28.522948079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:28.523909 containerd[1717]: time="2026-03-11T01:23:28.523625200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 594.562362ms" Mar 11 01:23:28.523909 containerd[1717]: time="2026-03-11T01:23:28.523651440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 11 01:23:28.524313 containerd[1717]: time="2026-03-11T01:23:28.524292042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 11 01:23:29.570775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3366716064.mount: Deactivated successfully. Mar 11 01:23:30.800489 containerd[1717]: time="2026-03-11T01:23:30.799547965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:30.802048 containerd[1717]: time="2026-03-11T01:23:30.801836289Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515" Mar 11 01:23:30.804794 containerd[1717]: time="2026-03-11T01:23:30.804328253Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:30.809523 containerd[1717]: time="2026-03-11T01:23:30.809496381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:30.810669 containerd[1717]: time="2026-03-11T01:23:30.810640583Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 2.286243701s" Mar 11 
01:23:30.810772 containerd[1717]: time="2026-03-11T01:23:30.810757183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Mar 11 01:23:35.170762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 11 01:23:35.183741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:36.517801 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 11 01:23:36.517890 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 11 01:23:36.518125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:36.533765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:36.675875 systemd[1]: Reloading requested from client PID 2593 ('systemctl') (unit session-9.scope)... Mar 11 01:23:36.675888 systemd[1]: Reloading... Mar 11 01:23:36.779525 zram_generator::config[2633]: No configuration found. Mar 11 01:23:36.884810 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:23:36.965297 systemd[1]: Reloading finished in 289 ms. Mar 11 01:23:37.008523 update_engine[1695]: I20260311 01:23:37.005548 1695 update_attempter.cc:509] Updating boot flags... Mar 11 01:23:37.011023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:37.021693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:37.022263 systemd[1]: kubelet.service: Deactivated successfully. Mar 11 01:23:37.022477 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:37.024218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 11 01:23:37.697476 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (2709) Mar 11 01:23:38.026143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:38.032674 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 01:23:38.066910 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 01:23:38.066910 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 01:23:38.067673 kubelet[2741]: I0311 01:23:38.067634 2741 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 01:23:39.220847 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 11 01:23:39.404359 kubelet[2741]: I0311 01:23:39.404322 2741 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 11 01:23:39.404359 kubelet[2741]: I0311 01:23:39.404349 2741 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 01:23:39.404751 kubelet[2741]: I0311 01:23:39.404372 2741 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 01:23:39.404751 kubelet[2741]: I0311 01:23:39.404379 2741 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 11 01:23:39.404751 kubelet[2741]: I0311 01:23:39.404616 2741 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 01:23:39.413633 kubelet[2741]: E0311 01:23:39.413588 2741 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 01:23:39.416133 kubelet[2741]: I0311 01:23:39.416104 2741 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 01:23:39.418905 kubelet[2741]: E0311 01:23:39.418879 2741 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 01:23:39.418970 kubelet[2741]: I0311 01:23:39.418921 2741 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 11 01:23:39.421273 kubelet[2741]: I0311 01:23:39.421257 2741 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 11 01:23:39.421464 kubelet[2741]: I0311 01:23:39.421435 2741 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 01:23:39.421597 kubelet[2741]: I0311 01:23:39.421466 2741 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-49f1e4db19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 01:23:39.421597 kubelet[2741]: I0311 01:23:39.421596 2741 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 
01:23:39.421696 kubelet[2741]: I0311 01:23:39.421605 2741 container_manager_linux.go:306] "Creating device plugin manager" Mar 11 01:23:39.421696 kubelet[2741]: I0311 01:23:39.421679 2741 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 01:23:39.426389 kubelet[2741]: I0311 01:23:39.426375 2741 state_mem.go:36] "Initialized new in-memory state store" Mar 11 01:23:39.427541 kubelet[2741]: I0311 01:23:39.427525 2741 kubelet.go:475] "Attempting to sync node with API server" Mar 11 01:23:39.427584 kubelet[2741]: I0311 01:23:39.427545 2741 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 01:23:39.427584 kubelet[2741]: I0311 01:23:39.427572 2741 kubelet.go:387] "Adding apiserver pod source" Mar 11 01:23:39.428496 kubelet[2741]: I0311 01:23:39.427588 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 01:23:39.428783 kubelet[2741]: E0311 01:23:39.428678 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-49f1e4db19&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 01:23:39.428783 kubelet[2741]: E0311 01:23:39.428763 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 01:23:39.430620 kubelet[2741]: I0311 01:23:39.429716 2741 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 11 01:23:39.430620 kubelet[2741]: I0311 01:23:39.430246 2741 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 01:23:39.430620 kubelet[2741]: I0311 01:23:39.430272 2741 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 01:23:39.430620 kubelet[2741]: W0311 01:23:39.430299 2741 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 11 01:23:39.433538 kubelet[2741]: I0311 01:23:39.433524 2741 server.go:1262] "Started kubelet" Mar 11 01:23:39.434715 kubelet[2741]: I0311 01:23:39.434694 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 01:23:39.438112 kubelet[2741]: E0311 01:23:39.437140 2741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-49f1e4db19.189ba4edc5d20779 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-49f1e4db19,UID:ci-4081.3.6-n-49f1e4db19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-49f1e4db19,},FirstTimestamp:2026-03-11 01:23:39.433502585 +0000 UTC m=+1.398229226,LastTimestamp:2026-03-11 01:23:39.433502585 +0000 UTC m=+1.398229226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-49f1e4db19,}" Mar 11 01:23:39.439542 kubelet[2741]: E0311 01:23:39.439521 2741 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 01:23:39.440201 kubelet[2741]: I0311 01:23:39.439653 2741 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 01:23:39.440409 kubelet[2741]: I0311 01:23:39.440385 2741 server.go:310] "Adding debug handlers to kubelet server" Mar 11 01:23:39.443047 kubelet[2741]: I0311 01:23:39.443032 2741 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 11 01:23:39.443538 kubelet[2741]: I0311 01:23:39.443158 2741 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 01:23:39.443538 kubelet[2741]: I0311 01:23:39.443220 2741 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 01:23:39.443538 kubelet[2741]: E0311 01:23:39.443275 2741 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" Mar 11 01:23:39.443538 kubelet[2741]: I0311 01:23:39.443377 2741 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 01:23:39.443667 kubelet[2741]: I0311 01:23:39.443636 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 01:23:39.445944 kubelet[2741]: I0311 01:23:39.445916 2741 factory.go:223] Registration of the systemd container factory successfully Mar 11 01:23:39.446013 kubelet[2741]: I0311 01:23:39.446002 2741 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 01:23:39.446068 kubelet[2741]: I0311 01:23:39.446021 2741 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 01:23:39.446311 kubelet[2741]: I0311 01:23:39.446298 2741 reconciler.go:29] "Reconciler: start to 
sync state" Mar 11 01:23:39.446823 kubelet[2741]: E0311 01:23:39.446081 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-49f1e4db19?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms" Mar 11 01:23:39.447547 kubelet[2741]: E0311 01:23:39.447213 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 01:23:39.448977 kubelet[2741]: I0311 01:23:39.448949 2741 factory.go:223] Registration of the containerd container factory successfully Mar 11 01:23:39.454670 kubelet[2741]: I0311 01:23:39.454563 2741 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 11 01:23:39.456263 kubelet[2741]: I0311 01:23:39.456237 2741 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 11 01:23:39.456657 kubelet[2741]: I0311 01:23:39.456355 2741 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 11 01:23:39.456657 kubelet[2741]: I0311 01:23:39.456387 2741 kubelet.go:2428] "Starting kubelet main sync loop" Mar 11 01:23:39.456657 kubelet[2741]: E0311 01:23:39.456439 2741 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 01:23:39.462336 kubelet[2741]: E0311 01:23:39.462316 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 01:23:39.544936 kubelet[2741]: E0311 01:23:39.544836 2741 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" Mar 11 01:23:39.549906 kubelet[2741]: I0311 01:23:39.549656 2741 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 01:23:39.549906 kubelet[2741]: I0311 01:23:39.549673 2741 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 01:23:39.549906 kubelet[2741]: I0311 01:23:39.549692 2741 state_mem.go:36] "Initialized new in-memory state store" Mar 11 01:23:39.553518 kubelet[2741]: I0311 01:23:39.553500 2741 policy_none.go:49] "None policy: Start" Mar 11 01:23:39.553796 kubelet[2741]: I0311 01:23:39.553600 2741 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 01:23:39.553796 kubelet[2741]: I0311 01:23:39.553617 2741 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 11 01:23:39.557418 kubelet[2741]: I0311 01:23:39.557184 2741 policy_none.go:47] "Start" Mar 11 01:23:39.557418 kubelet[2741]: E0311 01:23:39.557257 2741 
kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 11 01:23:39.561146 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 11 01:23:39.571272 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 11 01:23:39.573891 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 11 01:23:39.580156 kubelet[2741]: E0311 01:23:39.580138 2741 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 01:23:39.580946 kubelet[2741]: I0311 01:23:39.580603 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 01:23:39.580946 kubelet[2741]: I0311 01:23:39.580621 2741 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 01:23:39.580946 kubelet[2741]: I0311 01:23:39.580840 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 01:23:39.582446 kubelet[2741]: E0311 01:23:39.582427 2741 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 11 01:23:39.582534 kubelet[2741]: E0311 01:23:39.582475 2741 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-49f1e4db19\" not found" Mar 11 01:23:39.647534 kubelet[2741]: E0311 01:23:39.647496 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-49f1e4db19?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Mar 11 01:23:39.683386 kubelet[2741]: I0311 01:23:39.683349 2741 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.683684 kubelet[2741]: E0311 01:23:39.683654 2741 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.770286 systemd[1]: Created slice kubepods-burstable-pod5d12b83d98385355db5d6ed138c008ba.slice - libcontainer container kubepods-burstable-pod5d12b83d98385355db5d6ed138c008ba.slice. Mar 11 01:23:39.776108 kubelet[2741]: E0311 01:23:39.776081 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.780850 systemd[1]: Created slice kubepods-burstable-pod7013c85f0861388abf73fa052977a723.slice - libcontainer container kubepods-burstable-pod7013c85f0861388abf73fa052977a723.slice. 
Mar 11 01:23:39.791466 kubelet[2741]: E0311 01:23:39.791434 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.794046 systemd[1]: Created slice kubepods-burstable-pod747989d66d6d0dda7682610997eb9a06.slice - libcontainer container kubepods-burstable-pod747989d66d6d0dda7682610997eb9a06.slice. Mar 11 01:23:39.795531 kubelet[2741]: E0311 01:23:39.795471 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.848952 kubelet[2741]: I0311 01:23:39.848928 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d12b83d98385355db5d6ed138c008ba-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-49f1e4db19\" (UID: \"5d12b83d98385355db5d6ed138c008ba\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849026 kubelet[2741]: I0311 01:23:39.848955 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7013c85f0861388abf73fa052977a723-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" (UID: \"7013c85f0861388abf73fa052977a723\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849026 kubelet[2741]: I0311 01:23:39.848972 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7013c85f0861388abf73fa052977a723-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" (UID: \"7013c85f0861388abf73fa052977a723\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849026 kubelet[2741]: I0311 01:23:39.848986 2741 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7013c85f0861388abf73fa052977a723-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" (UID: \"7013c85f0861388abf73fa052977a723\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849026 kubelet[2741]: I0311 01:23:39.849001 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849026 kubelet[2741]: I0311 01:23:39.849015 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849135 kubelet[2741]: I0311 01:23:39.849030 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849135 kubelet[2741]: I0311 01:23:39.849043 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: 
\"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.849135 kubelet[2741]: I0311 01:23:39.849057 2741 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.885630 kubelet[2741]: I0311 01:23:39.885317 2741 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:39.885751 kubelet[2741]: E0311 01:23:39.885727 2741 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:40.048364 kubelet[2741]: E0311 01:23:40.048265 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-49f1e4db19?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Mar 11 01:23:40.081232 containerd[1717]: time="2026-03-11T01:23:40.081176663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-49f1e4db19,Uid:5d12b83d98385355db5d6ed138c008ba,Namespace:kube-system,Attempt:0,}" Mar 11 01:23:40.096073 containerd[1717]: time="2026-03-11T01:23:40.096027248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-49f1e4db19,Uid:7013c85f0861388abf73fa052977a723,Namespace:kube-system,Attempt:0,}" Mar 11 01:23:40.100674 containerd[1717]: time="2026-03-11T01:23:40.100492336Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-49f1e4db19,Uid:747989d66d6d0dda7682610997eb9a06,Namespace:kube-system,Attempt:0,}" Mar 11 01:23:40.288310 kubelet[2741]: I0311 01:23:40.288006 2741 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:40.288310 kubelet[2741]: E0311 01:23:40.288273 2741 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:40.508626 kubelet[2741]: E0311 01:23:40.508593 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 11 01:23:40.582782 kubelet[2741]: E0311 01:23:40.582736 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-49f1e4db19&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 11 01:23:40.725068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1798576239.mount: Deactivated successfully. 
Mar 11 01:23:40.750932 containerd[1717]: time="2026-03-11T01:23:40.750889374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:23:40.753005 containerd[1717]: time="2026-03-11T01:23:40.752972618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 11 01:23:40.756903 containerd[1717]: time="2026-03-11T01:23:40.756872864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:23:40.760192 kubelet[2741]: E0311 01:23:40.760106 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 11 01:23:40.761728 containerd[1717]: time="2026-03-11T01:23:40.760983871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:23:40.765414 containerd[1717]: time="2026-03-11T01:23:40.765385839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 01:23:40.770634 containerd[1717]: time="2026-03-11T01:23:40.769629166Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:23:40.771466 containerd[1717]: time="2026-03-11T01:23:40.771114529Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 11 01:23:40.775302 containerd[1717]: time="2026-03-11T01:23:40.775255776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 11 01:23:40.777475 containerd[1717]: time="2026-03-11T01:23:40.775981057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 679.882729ms" Mar 11 01:23:40.778350 containerd[1717]: time="2026-03-11T01:23:40.778318261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 677.777605ms" Mar 11 01:23:40.779082 containerd[1717]: time="2026-03-11T01:23:40.778947182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 697.325879ms" Mar 11 01:23:40.849039 kubelet[2741]: E0311 01:23:40.848992 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-49f1e4db19?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Mar 11 01:23:40.964305 
kubelet[2741]: E0311 01:23:40.964265 2741 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 11 01:23:41.090321 kubelet[2741]: I0311 01:23:41.090229 2741 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:41.090836 kubelet[2741]: E0311 01:23:41.090812 2741 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:41.602155 kubelet[2741]: E0311 01:23:41.602121 2741 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.694624717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.696414360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.696523240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.696539400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.696582440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.696554120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:41.697900 containerd[1717]: time="2026-03-11T01:23:41.696687800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:41.698839 containerd[1717]: time="2026-03-11T01:23:41.698783324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:41.705552 containerd[1717]: time="2026-03-11T01:23:41.705430495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:23:41.705724 containerd[1717]: time="2026-03-11T01:23:41.705525055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:23:41.705920 containerd[1717]: time="2026-03-11T01:23:41.705884096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:41.706906 containerd[1717]: time="2026-03-11T01:23:41.706851098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:41.724626 systemd[1]: Started cri-containerd-8f0150724686fa2f0a67188fbf79a54ca73684991a66c7b828337eb2c3a2489d.scope - libcontainer container 8f0150724686fa2f0a67188fbf79a54ca73684991a66c7b828337eb2c3a2489d. Mar 11 01:23:41.726716 systemd[1]: Started cri-containerd-c13e5b5317a111d515b4044ee241e719dd6c6e48affeb7bcee885da51f48db3b.scope - libcontainer container c13e5b5317a111d515b4044ee241e719dd6c6e48affeb7bcee885da51f48db3b. Mar 11 01:23:41.744591 systemd[1]: Started cri-containerd-98cb864e044ed44faa80f26b49554f2c8db2aaf02e4ca2b0b5004b83957726af.scope - libcontainer container 98cb864e044ed44faa80f26b49554f2c8db2aaf02e4ca2b0b5004b83957726af. Mar 11 01:23:41.777821 containerd[1717]: time="2026-03-11T01:23:41.777655459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-49f1e4db19,Uid:7013c85f0861388abf73fa052977a723,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f0150724686fa2f0a67188fbf79a54ca73684991a66c7b828337eb2c3a2489d\"" Mar 11 01:23:41.784518 containerd[1717]: time="2026-03-11T01:23:41.784069270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-49f1e4db19,Uid:5d12b83d98385355db5d6ed138c008ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c13e5b5317a111d515b4044ee241e719dd6c6e48affeb7bcee885da51f48db3b\"" Mar 11 01:23:41.789786 containerd[1717]: time="2026-03-11T01:23:41.789668920Z" level=info msg="CreateContainer within sandbox \"8f0150724686fa2f0a67188fbf79a54ca73684991a66c7b828337eb2c3a2489d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 11 01:23:41.793136 containerd[1717]: time="2026-03-11T01:23:41.793102046Z" level=info msg="CreateContainer within sandbox \"c13e5b5317a111d515b4044ee241e719dd6c6e48affeb7bcee885da51f48db3b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 11 01:23:41.795965 containerd[1717]: time="2026-03-11T01:23:41.795400250Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-49f1e4db19,Uid:747989d66d6d0dda7682610997eb9a06,Namespace:kube-system,Attempt:0,} returns sandbox id \"98cb864e044ed44faa80f26b49554f2c8db2aaf02e4ca2b0b5004b83957726af\"" Mar 11 01:23:41.802381 containerd[1717]: time="2026-03-11T01:23:41.802356782Z" level=info msg="CreateContainer within sandbox \"98cb864e044ed44faa80f26b49554f2c8db2aaf02e4ca2b0b5004b83957726af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 11 01:23:41.821939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1294266898.mount: Deactivated successfully. Mar 11 01:23:41.851711 containerd[1717]: time="2026-03-11T01:23:41.851672627Z" level=info msg="CreateContainer within sandbox \"8f0150724686fa2f0a67188fbf79a54ca73684991a66c7b828337eb2c3a2489d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c30f7c114c3315a1c2040c1f6b178a237de2e3b8ca513145acdbe773f542c96\"" Mar 11 01:23:41.852530 containerd[1717]: time="2026-03-11T01:23:41.852396548Z" level=info msg="StartContainer for \"4c30f7c114c3315a1c2040c1f6b178a237de2e3b8ca513145acdbe773f542c96\"" Mar 11 01:23:41.858738 containerd[1717]: time="2026-03-11T01:23:41.858621359Z" level=info msg="CreateContainer within sandbox \"c13e5b5317a111d515b4044ee241e719dd6c6e48affeb7bcee885da51f48db3b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33a49ad0879d9c59e4e1c32ba419ded3f523fd2a23e0dfb3feaef07dfc90ed11\"" Mar 11 01:23:41.859610 containerd[1717]: time="2026-03-11T01:23:41.859179360Z" level=info msg="StartContainer for \"33a49ad0879d9c59e4e1c32ba419ded3f523fd2a23e0dfb3feaef07dfc90ed11\"" Mar 11 01:23:41.860226 containerd[1717]: time="2026-03-11T01:23:41.860199801Z" level=info msg="CreateContainer within sandbox \"98cb864e044ed44faa80f26b49554f2c8db2aaf02e4ca2b0b5004b83957726af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"074dddc73a0dadbce68727cd2f8400f43b0679aa04111409afd08a1e00e1b440\"" Mar 11 01:23:41.860606 containerd[1717]: time="2026-03-11T01:23:41.860583002Z" level=info msg="StartContainer for \"074dddc73a0dadbce68727cd2f8400f43b0679aa04111409afd08a1e00e1b440\"" Mar 11 01:23:41.884610 systemd[1]: Started cri-containerd-4c30f7c114c3315a1c2040c1f6b178a237de2e3b8ca513145acdbe773f542c96.scope - libcontainer container 4c30f7c114c3315a1c2040c1f6b178a237de2e3b8ca513145acdbe773f542c96. Mar 11 01:23:41.900614 systemd[1]: Started cri-containerd-33a49ad0879d9c59e4e1c32ba419ded3f523fd2a23e0dfb3feaef07dfc90ed11.scope - libcontainer container 33a49ad0879d9c59e4e1c32ba419ded3f523fd2a23e0dfb3feaef07dfc90ed11. Mar 11 01:23:41.903759 systemd[1]: Started cri-containerd-074dddc73a0dadbce68727cd2f8400f43b0679aa04111409afd08a1e00e1b440.scope - libcontainer container 074dddc73a0dadbce68727cd2f8400f43b0679aa04111409afd08a1e00e1b440. Mar 11 01:23:41.950737 containerd[1717]: time="2026-03-11T01:23:41.950685557Z" level=info msg="StartContainer for \"074dddc73a0dadbce68727cd2f8400f43b0679aa04111409afd08a1e00e1b440\" returns successfully" Mar 11 01:23:41.951164 containerd[1717]: time="2026-03-11T01:23:41.950760717Z" level=info msg="StartContainer for \"4c30f7c114c3315a1c2040c1f6b178a237de2e3b8ca513145acdbe773f542c96\" returns successfully" Mar 11 01:23:41.959023 containerd[1717]: time="2026-03-11T01:23:41.958909011Z" level=info msg="StartContainer for \"33a49ad0879d9c59e4e1c32ba419ded3f523fd2a23e0dfb3feaef07dfc90ed11\" returns successfully" Mar 11 01:23:42.489321 kubelet[2741]: E0311 01:23:42.489292 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:42.491443 kubelet[2741]: E0311 01:23:42.491421 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" 
node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:42.494396 kubelet[2741]: E0311 01:23:42.494377 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:42.694771 kubelet[2741]: I0311 01:23:42.694745 2741 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:43.503931 kubelet[2741]: E0311 01:23:43.503896 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:43.504387 kubelet[2741]: E0311 01:23:43.504293 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.430860 kubelet[2741]: I0311 01:23:44.430637 2741 apiserver.go:52] "Watching apiserver" Mar 11 01:23:44.493804 kubelet[2741]: E0311 01:23:44.493766 2741 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.499682 kubelet[2741]: E0311 01:23:44.499663 2741 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.530535 kubelet[2741]: E0311 01:23:44.530445 2741 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-49f1e4db19.189ba4edc5d20779 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-49f1e4db19,UID:ci-4081.3.6-n-49f1e4db19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-49f1e4db19,},FirstTimestamp:2026-03-11 01:23:39.433502585 +0000 UTC m=+1.398229226,LastTimestamp:2026-03-11 01:23:39.433502585 +0000 UTC m=+1.398229226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-49f1e4db19,}" Mar 11 01:23:44.542684 kubelet[2741]: I0311 01:23:44.542661 2741 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.543621 kubelet[2741]: I0311 01:23:44.543604 2741 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.547077 kubelet[2741]: I0311 01:23:44.547052 2741 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 01:23:44.596784 kubelet[2741]: E0311 01:23:44.596669 2741 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-n-49f1e4db19.189ba4edc62da813 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-49f1e4db19,UID:ci-4081.3.6-n-49f1e4db19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-49f1e4db19,},FirstTimestamp:2026-03-11 01:23:39.439507475 +0000 UTC m=+1.404234116,LastTimestamp:2026-03-11 01:23:39.439507475 +0000 UTC m=+1.404234116,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-49f1e4db19,}" Mar 11 01:23:44.641115 kubelet[2741]: E0311 01:23:44.641081 2741 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-49f1e4db19\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.641115 kubelet[2741]: I0311 01:23:44.641111 2741 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.650473 kubelet[2741]: E0311 01:23:44.649664 2741 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.650473 kubelet[2741]: I0311 01:23:44.649690 2741 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:44.653086 kubelet[2741]: E0311 01:23:44.653055 2741 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:46.975195 systemd[1]: Reloading requested from client PID 3025 ('systemctl') (unit session-9.scope)... Mar 11 01:23:46.975210 systemd[1]: Reloading... Mar 11 01:23:47.073492 zram_generator::config[3068]: No configuration found. Mar 11 01:23:47.162248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 11 01:23:47.256413 systemd[1]: Reloading finished in 280 ms. Mar 11 01:23:47.292141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:47.309524 systemd[1]: kubelet.service: Deactivated successfully. Mar 11 01:23:47.309737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:47.309787 systemd[1]: kubelet.service: Consumed 1.739s CPU time, 124.1M memory peak, 0B memory swap peak. 
Mar 11 01:23:47.314761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 11 01:23:47.511048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 11 01:23:47.522701 (kubelet)[3129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 11 01:23:47.565496 kubelet[3129]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 11 01:23:47.565496 kubelet[3129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 01:23:47.565496 kubelet[3129]: I0311 01:23:47.564594 3129 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 01:23:47.571403 kubelet[3129]: I0311 01:23:47.570913 3129 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 11 01:23:47.572751 kubelet[3129]: I0311 01:23:47.572691 3129 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 11 01:23:47.572751 kubelet[3129]: I0311 01:23:47.572726 3129 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 11 01:23:47.572855 kubelet[3129]: I0311 01:23:47.572732 3129 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 11 01:23:47.574109 kubelet[3129]: I0311 01:23:47.573503 3129 server.go:956] "Client rotation is on, will bootstrap in background" Mar 11 01:23:47.575470 kubelet[3129]: I0311 01:23:47.575423 3129 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 11 01:23:47.578120 kubelet[3129]: I0311 01:23:47.578095 3129 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 11 01:23:47.580532 kubelet[3129]: E0311 01:23:47.580503 3129 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 11 01:23:47.580643 kubelet[3129]: I0311 01:23:47.580549 3129 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 11 01:23:47.583144 kubelet[3129]: I0311 01:23:47.583128 3129 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 11 01:23:47.583310 kubelet[3129]: I0311 01:23:47.583285 3129 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 01:23:47.583493 kubelet[3129]: I0311 01:23:47.583308 3129 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-49f1e4db19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 01:23:47.583585 kubelet[3129]: I0311 01:23:47.583495 3129 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 
01:23:47.583585 kubelet[3129]: I0311 01:23:47.583504 3129 container_manager_linux.go:306] "Creating device plugin manager" Mar 11 01:23:47.583585 kubelet[3129]: I0311 01:23:47.583526 3129 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 11 01:23:47.583699 kubelet[3129]: I0311 01:23:47.583684 3129 state_mem.go:36] "Initialized new in-memory state store" Mar 11 01:23:47.583818 kubelet[3129]: I0311 01:23:47.583798 3129 kubelet.go:475] "Attempting to sync node with API server" Mar 11 01:23:47.583818 kubelet[3129]: I0311 01:23:47.583814 3129 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 01:23:47.584382 kubelet[3129]: I0311 01:23:47.583840 3129 kubelet.go:387] "Adding apiserver pod source" Mar 11 01:23:47.584382 kubelet[3129]: I0311 01:23:47.583850 3129 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 01:23:47.585805 kubelet[3129]: I0311 01:23:47.585787 3129 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 11 01:23:47.586307 kubelet[3129]: I0311 01:23:47.586290 3129 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 11 01:23:47.586366 kubelet[3129]: I0311 01:23:47.586318 3129 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 11 01:23:47.591865 kubelet[3129]: I0311 01:23:47.591845 3129 server.go:1262] "Started kubelet" Mar 11 01:23:47.594507 kubelet[3129]: I0311 01:23:47.594321 3129 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 01:23:47.604577 kubelet[3129]: I0311 01:23:47.604557 3129 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 11 01:23:47.605478 kubelet[3129]: I0311 01:23:47.605402 3129 server.go:180] "Starting to listen" 
address="0.0.0.0" port=10250 Mar 11 01:23:47.606371 kubelet[3129]: I0311 01:23:47.604744 3129 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 01:23:47.606371 kubelet[3129]: I0311 01:23:47.606326 3129 reconciler.go:29] "Reconciler: start to sync state" Mar 11 01:23:47.606853 kubelet[3129]: I0311 01:23:47.606809 3129 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 01:23:47.606967 kubelet[3129]: I0311 01:23:47.606954 3129 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 11 01:23:47.607279 kubelet[3129]: I0311 01:23:47.607263 3129 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 01:23:47.613473 kubelet[3129]: I0311 01:23:47.611728 3129 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 11 01:23:47.615714 kubelet[3129]: E0311 01:23:47.604896 3129 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-49f1e4db19\" not found" Mar 11 01:23:47.615807 kubelet[3129]: I0311 01:23:47.615684 3129 server.go:310] "Adding debug handlers to kubelet server" Mar 11 01:23:47.619915 kubelet[3129]: I0311 01:23:47.619896 3129 factory.go:223] Registration of the systemd container factory successfully Mar 11 01:23:47.620805 kubelet[3129]: I0311 01:23:47.620780 3129 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 11 01:23:47.636892 kubelet[3129]: E0311 01:23:47.636864 3129 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 11 01:23:47.636995 kubelet[3129]: I0311 01:23:47.636979 3129 factory.go:223] Registration of the containerd container factory successfully Mar 11 01:23:47.641855 kubelet[3129]: I0311 01:23:47.640410 3129 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 11 01:23:47.646529 kubelet[3129]: I0311 01:23:47.646450 3129 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 11 01:23:47.646529 kubelet[3129]: I0311 01:23:47.646512 3129 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 11 01:23:47.646529 kubelet[3129]: I0311 01:23:47.646531 3129 kubelet.go:2428] "Starting kubelet main sync loop" Mar 11 01:23:47.646649 kubelet[3129]: E0311 01:23:47.646573 3129 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 01:23:47.672219 kubelet[3129]: I0311 01:23:47.672195 3129 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 11 01:23:47.672448 kubelet[3129]: I0311 01:23:47.672354 3129 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 11 01:23:47.672783 kubelet[3129]: I0311 01:23:47.672770 3129 state_mem.go:36] "Initialized new in-memory state store" Mar 11 01:23:47.672993 kubelet[3129]: I0311 01:23:47.672960 3129 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 11 01:23:47.673703 kubelet[3129]: I0311 01:23:47.673672 3129 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 11 01:23:47.673807 kubelet[3129]: I0311 01:23:47.673791 3129 policy_none.go:49] "None policy: Start" Mar 11 01:23:47.673866 kubelet[3129]: I0311 01:23:47.673856 3129 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 11 01:23:47.673922 kubelet[3129]: I0311 01:23:47.673914 3129 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Mar 11 01:23:47.674101 kubelet[3129]: I0311 01:23:47.674089 3129 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 11 01:23:47.674166 kubelet[3129]: I0311 01:23:47.674158 3129 policy_none.go:47] "Start" Mar 11 01:23:47.681289 kubelet[3129]: E0311 01:23:47.681271 3129 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 11 01:23:47.681534 kubelet[3129]: I0311 01:23:47.681521 3129 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 01:23:47.681640 kubelet[3129]: I0311 01:23:47.681611 3129 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 01:23:47.682513 kubelet[3129]: I0311 01:23:47.682498 3129 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 01:23:47.683635 kubelet[3129]: E0311 01:23:47.683618 3129 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 11 01:23:47.748608 kubelet[3129]: I0311 01:23:47.748159 3129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.749099 kubelet[3129]: I0311 01:23:47.748927 3129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.750275 kubelet[3129]: I0311 01:23:47.750257 3129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.769815 kubelet[3129]: I0311 01:23:47.769720 3129 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:23:47.775445 kubelet[3129]: I0311 01:23:47.775425 3129 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:23:47.782970 kubelet[3129]: I0311 01:23:47.782656 3129 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:23:47.784347 kubelet[3129]: I0311 01:23:47.784325 3129 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.815714 kubelet[3129]: I0311 01:23:47.815683 3129 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.815793 kubelet[3129]: I0311 01:23:47.815767 3129 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907371 kubelet[3129]: I0311 01:23:47.907328 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7013c85f0861388abf73fa052977a723-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" (UID: \"7013c85f0861388abf73fa052977a723\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907371 kubelet[3129]: I0311 01:23:47.907365 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7013c85f0861388abf73fa052977a723-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" (UID: \"7013c85f0861388abf73fa052977a723\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907371 kubelet[3129]: I0311 01:23:47.907391 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7013c85f0861388abf73fa052977a723-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-49f1e4db19\" (UID: \"7013c85f0861388abf73fa052977a723\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907371 kubelet[3129]: I0311 01:23:47.907409 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907371 kubelet[3129]: I0311 01:23:47.907427 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907886 kubelet[3129]: I0311 01:23:47.907445 3129 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907886 kubelet[3129]: I0311 01:23:47.907473 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907886 kubelet[3129]: I0311 01:23:47.907493 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/747989d66d6d0dda7682610997eb9a06-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-49f1e4db19\" (UID: \"747989d66d6d0dda7682610997eb9a06\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:47.907886 kubelet[3129]: I0311 01:23:47.907508 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d12b83d98385355db5d6ed138c008ba-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-49f1e4db19\" (UID: \"5d12b83d98385355db5d6ed138c008ba\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:48.585319 kubelet[3129]: I0311 01:23:48.585100 3129 apiserver.go:52] "Watching apiserver" Mar 11 01:23:48.608568 kubelet[3129]: I0311 01:23:48.608534 3129 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 01:23:48.662882 kubelet[3129]: I0311 01:23:48.662766 
3129 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:48.675508 kubelet[3129]: I0311 01:23:48.675474 3129 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 11 01:23:48.675624 kubelet[3129]: E0311 01:23:48.675530 3129 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-49f1e4db19\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" Mar 11 01:23:48.689757 kubelet[3129]: I0311 01:23:48.689587 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-49f1e4db19" podStartSLOduration=1.689574693 podStartE2EDuration="1.689574693s" podCreationTimestamp="2026-03-11 01:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:23:48.688586571 +0000 UTC m=+1.163311591" watchObservedRunningTime="2026-03-11 01:23:48.689574693 +0000 UTC m=+1.164299713" Mar 11 01:23:48.727039 kubelet[3129]: I0311 01:23:48.726868 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-49f1e4db19" podStartSLOduration=1.726854156 podStartE2EDuration="1.726854156s" podCreationTimestamp="2026-03-11 01:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:23:48.704733119 +0000 UTC m=+1.179458139" watchObservedRunningTime="2026-03-11 01:23:48.726854156 +0000 UTC m=+1.201579176" Mar 11 01:23:48.727039 kubelet[3129]: I0311 01:23:48.726949 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-49f1e4db19" podStartSLOduration=1.7269449959999998 
podStartE2EDuration="1.726944996s" podCreationTimestamp="2026-03-11 01:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:23:48.726933756 +0000 UTC m=+1.201658776" watchObservedRunningTime="2026-03-11 01:23:48.726944996 +0000 UTC m=+1.201669976" Mar 11 01:23:53.472967 kubelet[3129]: I0311 01:23:53.472928 3129 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 11 01:23:53.474309 containerd[1717]: time="2026-03-11T01:23:53.474002227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 11 01:23:53.474570 kubelet[3129]: I0311 01:23:53.474181 3129 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 11 01:23:54.281520 systemd[1]: Created slice kubepods-besteffort-pod9d8d088b_1c00_4970_8e4c_42096d9a7425.slice - libcontainer container kubepods-besteffort-pod9d8d088b_1c00_4970_8e4c_42096d9a7425.slice. 
Mar 11 01:23:54.342085 kubelet[3129]: I0311 01:23:54.342049 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d8d088b-1c00-4970-8e4c-42096d9a7425-xtables-lock\") pod \"kube-proxy-lxtzk\" (UID: \"9d8d088b-1c00-4970-8e4c-42096d9a7425\") " pod="kube-system/kube-proxy-lxtzk" Mar 11 01:23:54.342085 kubelet[3129]: I0311 01:23:54.342087 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d8d088b-1c00-4970-8e4c-42096d9a7425-lib-modules\") pod \"kube-proxy-lxtzk\" (UID: \"9d8d088b-1c00-4970-8e4c-42096d9a7425\") " pod="kube-system/kube-proxy-lxtzk" Mar 11 01:23:54.342240 kubelet[3129]: I0311 01:23:54.342107 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlgcb\" (UniqueName: \"kubernetes.io/projected/9d8d088b-1c00-4970-8e4c-42096d9a7425-kube-api-access-tlgcb\") pod \"kube-proxy-lxtzk\" (UID: \"9d8d088b-1c00-4970-8e4c-42096d9a7425\") " pod="kube-system/kube-proxy-lxtzk" Mar 11 01:23:54.342240 kubelet[3129]: I0311 01:23:54.342126 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d8d088b-1c00-4970-8e4c-42096d9a7425-kube-proxy\") pod \"kube-proxy-lxtzk\" (UID: \"9d8d088b-1c00-4970-8e4c-42096d9a7425\") " pod="kube-system/kube-proxy-lxtzk" Mar 11 01:23:54.594760 containerd[1717]: time="2026-03-11T01:23:54.594276981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxtzk,Uid:9d8d088b-1c00-4970-8e4c-42096d9a7425,Namespace:kube-system,Attempt:0,}" Mar 11 01:23:54.627908 containerd[1717]: time="2026-03-11T01:23:54.627824678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:23:54.627908 containerd[1717]: time="2026-03-11T01:23:54.627879478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:23:54.628203 containerd[1717]: time="2026-03-11T01:23:54.628063279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:54.628203 containerd[1717]: time="2026-03-11T01:23:54.628165799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:54.652597 systemd[1]: Started cri-containerd-3728246d328871602e461ce14aef5ca4ef9d3eea02d12bf5efa0b4d44c54ca20.scope - libcontainer container 3728246d328871602e461ce14aef5ca4ef9d3eea02d12bf5efa0b4d44c54ca20. Mar 11 01:23:54.696616 containerd[1717]: time="2026-03-11T01:23:54.696540156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxtzk,Uid:9d8d088b-1c00-4970-8e4c-42096d9a7425,Namespace:kube-system,Attempt:0,} returns sandbox id \"3728246d328871602e461ce14aef5ca4ef9d3eea02d12bf5efa0b4d44c54ca20\"" Mar 11 01:23:54.710247 containerd[1717]: time="2026-03-11T01:23:54.710033019Z" level=info msg="CreateContainer within sandbox \"3728246d328871602e461ce14aef5ca4ef9d3eea02d12bf5efa0b4d44c54ca20\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 11 01:23:54.716079 systemd[1]: Created slice kubepods-besteffort-podd9cdc03f_9f61_4259_9911_b5f178254642.slice - libcontainer container kubepods-besteffort-podd9cdc03f_9f61_4259_9911_b5f178254642.slice. 
Mar 11 01:23:54.745388 kubelet[3129]: I0311 01:23:54.745306 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9cdc03f-9f61-4259-9911-b5f178254642-var-lib-calico\") pod \"tigera-operator-5588576f44-mwknr\" (UID: \"d9cdc03f-9f61-4259-9911-b5f178254642\") " pod="tigera-operator/tigera-operator-5588576f44-mwknr" Mar 11 01:23:54.745388 kubelet[3129]: I0311 01:23:54.745347 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjndk\" (UniqueName: \"kubernetes.io/projected/d9cdc03f-9f61-4259-9911-b5f178254642-kube-api-access-tjndk\") pod \"tigera-operator-5588576f44-mwknr\" (UID: \"d9cdc03f-9f61-4259-9911-b5f178254642\") " pod="tigera-operator/tigera-operator-5588576f44-mwknr" Mar 11 01:23:54.751016 containerd[1717]: time="2026-03-11T01:23:54.749448726Z" level=info msg="CreateContainer within sandbox \"3728246d328871602e461ce14aef5ca4ef9d3eea02d12bf5efa0b4d44c54ca20\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce304b88a43bcd85696799e7df01b27753a04ba4e0f6c850ad52d0edace5bfca\"" Mar 11 01:23:54.752618 containerd[1717]: time="2026-03-11T01:23:54.751414889Z" level=info msg="StartContainer for \"ce304b88a43bcd85696799e7df01b27753a04ba4e0f6c850ad52d0edace5bfca\"" Mar 11 01:23:54.774595 systemd[1]: Started cri-containerd-ce304b88a43bcd85696799e7df01b27753a04ba4e0f6c850ad52d0edace5bfca.scope - libcontainer container ce304b88a43bcd85696799e7df01b27753a04ba4e0f6c850ad52d0edace5bfca. 
Mar 11 01:23:54.801124 containerd[1717]: time="2026-03-11T01:23:54.801079014Z" level=info msg="StartContainer for \"ce304b88a43bcd85696799e7df01b27753a04ba4e0f6c850ad52d0edace5bfca\" returns successfully" Mar 11 01:23:55.031641 containerd[1717]: time="2026-03-11T01:23:55.031109607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-mwknr,Uid:d9cdc03f-9f61-4259-9911-b5f178254642,Namespace:tigera-operator,Attempt:0,}" Mar 11 01:23:55.063003 containerd[1717]: time="2026-03-11T01:23:55.062912622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:23:55.063255 containerd[1717]: time="2026-03-11T01:23:55.062975062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:23:55.063255 containerd[1717]: time="2026-03-11T01:23:55.062989902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:55.063255 containerd[1717]: time="2026-03-11T01:23:55.063064662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:23:55.080586 systemd[1]: Started cri-containerd-fe00a5f4cfdbd2c3da8d4f093a73bf5d155e49cec1305aa6bae331a3dd4a5159.scope - libcontainer container fe00a5f4cfdbd2c3da8d4f093a73bf5d155e49cec1305aa6bae331a3dd4a5159. 
Mar 11 01:23:55.106506 containerd[1717]: time="2026-03-11T01:23:55.106412056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-mwknr,Uid:d9cdc03f-9f61-4259-9911-b5f178254642,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fe00a5f4cfdbd2c3da8d4f093a73bf5d155e49cec1305aa6bae331a3dd4a5159\"" Mar 11 01:23:55.108824 containerd[1717]: time="2026-03-11T01:23:55.108104419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 11 01:23:55.466228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240277998.mount: Deactivated successfully. Mar 11 01:23:55.687547 kubelet[3129]: I0311 01:23:55.687489 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lxtzk" podStartSLOduration=1.687475449 podStartE2EDuration="1.687475449s" podCreationTimestamp="2026-03-11 01:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:23:55.686753888 +0000 UTC m=+8.161478908" watchObservedRunningTime="2026-03-11 01:23:55.687475449 +0000 UTC m=+8.162200469" Mar 11 01:23:56.653810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount939796087.mount: Deactivated successfully. 
Mar 11 01:23:57.331315 containerd[1717]: time="2026-03-11T01:23:57.331268633Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:57.333364 containerd[1717]: time="2026-03-11T01:23:57.333213716Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 11 01:23:57.336251 containerd[1717]: time="2026-03-11T01:23:57.336013759Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:57.340914 containerd[1717]: time="2026-03-11T01:23:57.340878005Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:23:57.341693 containerd[1717]: time="2026-03-11T01:23:57.341664166Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.232868346s" Mar 11 01:23:57.341751 containerd[1717]: time="2026-03-11T01:23:57.341694406Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 11 01:23:57.348054 containerd[1717]: time="2026-03-11T01:23:57.347940574Z" level=info msg="CreateContainer within sandbox \"fe00a5f4cfdbd2c3da8d4f093a73bf5d155e49cec1305aa6bae331a3dd4a5159\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 11 01:23:57.368040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037886766.mount: Deactivated successfully. 
Mar 11 01:23:57.380673 containerd[1717]: time="2026-03-11T01:23:57.380645295Z" level=info msg="CreateContainer within sandbox \"fe00a5f4cfdbd2c3da8d4f093a73bf5d155e49cec1305aa6bae331a3dd4a5159\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fd95a9f381c1de68afc14f3ff2ea2059815b816e8bdc86433263d61c4459b514\"" Mar 11 01:23:57.381169 containerd[1717]: time="2026-03-11T01:23:57.381141415Z" level=info msg="StartContainer for \"fd95a9f381c1de68afc14f3ff2ea2059815b816e8bdc86433263d61c4459b514\"" Mar 11 01:23:57.405584 systemd[1]: Started cri-containerd-fd95a9f381c1de68afc14f3ff2ea2059815b816e8bdc86433263d61c4459b514.scope - libcontainer container fd95a9f381c1de68afc14f3ff2ea2059815b816e8bdc86433263d61c4459b514. Mar 11 01:23:57.430053 containerd[1717]: time="2026-03-11T01:23:57.429935996Z" level=info msg="StartContainer for \"fd95a9f381c1de68afc14f3ff2ea2059815b816e8bdc86433263d61c4459b514\" returns successfully" Mar 11 01:23:57.694802 kubelet[3129]: I0311 01:23:57.694211 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-mwknr" podStartSLOduration=1.459372616 podStartE2EDuration="3.694196165s" podCreationTimestamp="2026-03-11 01:23:54 +0000 UTC" firstStartedPulling="2026-03-11 01:23:55.107761738 +0000 UTC m=+7.582486758" lastFinishedPulling="2026-03-11 01:23:57.342585327 +0000 UTC m=+9.817310307" observedRunningTime="2026-03-11 01:23:57.693816724 +0000 UTC m=+10.168541744" watchObservedRunningTime="2026-03-11 01:23:57.694196165 +0000 UTC m=+10.168921185" Mar 11 01:24:03.371370 sudo[2186]: pam_unix(sudo:session): session closed for user root Mar 11 01:24:03.450494 sshd[2183]: pam_unix(sshd:session): session closed for user core Mar 11 01:24:03.457927 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:37074.service: Deactivated successfully. Mar 11 01:24:03.462187 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 11 01:24:03.462350 systemd[1]: session-9.scope: Consumed 6.550s CPU time, 154.4M memory peak, 0B memory swap peak.
Mar 11 01:24:03.463859 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit.
Mar 11 01:24:03.464947 systemd-logind[1690]: Removed session 9.
Mar 11 01:24:11.983335 systemd[1]: Created slice kubepods-besteffort-pod9aa401c5_d3c5_4214_88b9_527ba463ee2b.slice - libcontainer container kubepods-besteffort-pod9aa401c5_d3c5_4214_88b9_527ba463ee2b.slice.
Mar 11 01:24:12.042425 kubelet[3129]: I0311 01:24:12.042379 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa401c5-d3c5-4214-88b9-527ba463ee2b-tigera-ca-bundle\") pod \"calico-typha-69c97df57c-mlb9g\" (UID: \"9aa401c5-d3c5-4214-88b9-527ba463ee2b\") " pod="calico-system/calico-typha-69c97df57c-mlb9g"
Mar 11 01:24:12.042425 kubelet[3129]: I0311 01:24:12.042419 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhsg7\" (UniqueName: \"kubernetes.io/projected/9aa401c5-d3c5-4214-88b9-527ba463ee2b-kube-api-access-dhsg7\") pod \"calico-typha-69c97df57c-mlb9g\" (UID: \"9aa401c5-d3c5-4214-88b9-527ba463ee2b\") " pod="calico-system/calico-typha-69c97df57c-mlb9g"
Mar 11 01:24:12.042844 kubelet[3129]: I0311 01:24:12.042438 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9aa401c5-d3c5-4214-88b9-527ba463ee2b-typha-certs\") pod \"calico-typha-69c97df57c-mlb9g\" (UID: \"9aa401c5-d3c5-4214-88b9-527ba463ee2b\") " pod="calico-system/calico-typha-69c97df57c-mlb9g"
Mar 11 01:24:12.110855 systemd[1]: Created slice kubepods-besteffort-poda28f7d4d_50cb_461b_a20b_e7f0ee518ecf.slice - libcontainer container kubepods-besteffort-poda28f7d4d_50cb_461b_a20b_e7f0ee518ecf.slice.
Mar 11 01:24:12.143723 kubelet[3129]: I0311 01:24:12.142569 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-lib-modules\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143723 kubelet[3129]: I0311 01:24:12.142608 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-var-lib-calico\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143723 kubelet[3129]: I0311 01:24:12.142637 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vp2\" (UniqueName: \"kubernetes.io/projected/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-kube-api-access-z7vp2\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143723 kubelet[3129]: I0311 01:24:12.142655 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-bpffs\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143723 kubelet[3129]: I0311 01:24:12.142668 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-flexvol-driver-host\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143999 kubelet[3129]: I0311 01:24:12.142682 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-nodeproc\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143999 kubelet[3129]: I0311 01:24:12.142697 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-tigera-ca-bundle\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143999 kubelet[3129]: I0311 01:24:12.142713 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-xtables-lock\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143999 kubelet[3129]: I0311 01:24:12.142750 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-cni-bin-dir\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.143999 kubelet[3129]: I0311 01:24:12.142763 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-var-run-calico\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.144143 kubelet[3129]: I0311 01:24:12.142811 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-policysync\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.144143 kubelet[3129]: I0311 01:24:12.142825 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-cni-log-dir\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.144143 kubelet[3129]: I0311 01:24:12.142843 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-node-certs\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.144143 kubelet[3129]: I0311 01:24:12.142855 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-sys-fs\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.144143 kubelet[3129]: I0311 01:24:12.142870 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a28f7d4d-50cb-461b-a20b-e7f0ee518ecf-cni-net-dir\") pod \"calico-node-bd69x\" (UID: \"a28f7d4d-50cb-461b-a20b-e7f0ee518ecf\") " pod="calico-system/calico-node-bd69x"
Mar 11 01:24:12.221580 kubelet[3129]: E0311 01:24:12.221540 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b"
Mar 11 01:24:12.243489 kubelet[3129]: I0311 01:24:12.243195 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b-socket-dir\") pod \"csi-node-driver-tjh4h\" (UID: \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\") " pod="calico-system/csi-node-driver-tjh4h"
Mar 11 01:24:12.243489 kubelet[3129]: I0311 01:24:12.243232 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b-varrun\") pod \"csi-node-driver-tjh4h\" (UID: \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\") " pod="calico-system/csi-node-driver-tjh4h"
Mar 11 01:24:12.243489 kubelet[3129]: I0311 01:24:12.243257 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b-kubelet-dir\") pod \"csi-node-driver-tjh4h\" (UID: \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\") " pod="calico-system/csi-node-driver-tjh4h"
Mar 11 01:24:12.243489 kubelet[3129]: I0311 01:24:12.243274 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b-registration-dir\") pod \"csi-node-driver-tjh4h\" (UID: \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\") " pod="calico-system/csi-node-driver-tjh4h"
Mar 11 01:24:12.243489 kubelet[3129]: I0311 01:24:12.243288 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2g66\" (UniqueName: \"kubernetes.io/projected/c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b-kube-api-access-x2g66\") pod \"csi-node-driver-tjh4h\" (UID: \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\") " pod="calico-system/csi-node-driver-tjh4h"
Mar 11 01:24:12.252062 kubelet[3129]: E0311 01:24:12.251955 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.252653 kubelet[3129]: W0311 01:24:12.252409 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.252653 kubelet[3129]: E0311 01:24:12.252525 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.265034 kubelet[3129]: E0311 01:24:12.264941 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.265034 kubelet[3129]: W0311 01:24:12.264998 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.265034 kubelet[3129]: E0311 01:24:12.265018 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.292602 containerd[1717]: time="2026-03-11T01:24:12.292541335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c97df57c-mlb9g,Uid:9aa401c5-d3c5-4214-88b9-527ba463ee2b,Namespace:calico-system,Attempt:0,}"
Mar 11 01:24:12.330126 containerd[1717]: time="2026-03-11T01:24:12.330043879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 01:24:12.330306 containerd[1717]: time="2026-03-11T01:24:12.330139239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 01:24:12.330306 containerd[1717]: time="2026-03-11T01:24:12.330152839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 01:24:12.330306 containerd[1717]: time="2026-03-11T01:24:12.330223279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 01:24:12.344174 kubelet[3129]: E0311 01:24:12.344041 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.344174 kubelet[3129]: W0311 01:24:12.344063 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.344174 kubelet[3129]: E0311 01:24:12.344082 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.344431 kubelet[3129]: E0311 01:24:12.344387 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.344431 kubelet[3129]: W0311 01:24:12.344400 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.344431 kubelet[3129]: E0311 01:24:12.344413 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.344634 kubelet[3129]: E0311 01:24:12.344619 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.344634 kubelet[3129]: W0311 01:24:12.344630 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.344719 kubelet[3129]: E0311 01:24:12.344642 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.344799 kubelet[3129]: E0311 01:24:12.344784 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.344799 kubelet[3129]: W0311 01:24:12.344796 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.344880 kubelet[3129]: E0311 01:24:12.344805 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.345041 kubelet[3129]: E0311 01:24:12.345028 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.345041 kubelet[3129]: W0311 01:24:12.345040 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.345125 kubelet[3129]: E0311 01:24:12.345050 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.345366 kubelet[3129]: E0311 01:24:12.345348 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.345366 kubelet[3129]: W0311 01:24:12.345361 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.345552 kubelet[3129]: E0311 01:24:12.345376 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.345645 kubelet[3129]: E0311 01:24:12.345632 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.345645 kubelet[3129]: W0311 01:24:12.345644 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.345811 kubelet[3129]: E0311 01:24:12.345656 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.345927 kubelet[3129]: E0311 01:24:12.345893 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.345927 kubelet[3129]: W0311 01:24:12.345906 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.346051 kubelet[3129]: E0311 01:24:12.345919 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.346206 kubelet[3129]: E0311 01:24:12.346194 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.346302 kubelet[3129]: W0311 01:24:12.346268 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.346302 kubelet[3129]: E0311 01:24:12.346286 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.346552 kubelet[3129]: E0311 01:24:12.346535 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.346682 kubelet[3129]: W0311 01:24:12.346624 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.346682 kubelet[3129]: E0311 01:24:12.346642 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.346825 kubelet[3129]: E0311 01:24:12.346810 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.346825 kubelet[3129]: W0311 01:24:12.346824 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.346919 kubelet[3129]: E0311 01:24:12.346834 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.346992 kubelet[3129]: E0311 01:24:12.346981 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.346992 kubelet[3129]: W0311 01:24:12.346991 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.347068 kubelet[3129]: E0311 01:24:12.346999 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.347226 kubelet[3129]: E0311 01:24:12.347214 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.347291 kubelet[3129]: W0311 01:24:12.347227 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.347291 kubelet[3129]: E0311 01:24:12.347238 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.347627 systemd[1]: Started cri-containerd-1c701b8474c1d9922c913e8fbd62ea5b257b0eadad7d85a65ded04d397f9cc6a.scope - libcontainer container 1c701b8474c1d9922c913e8fbd62ea5b257b0eadad7d85a65ded04d397f9cc6a.
Mar 11 01:24:12.347884 kubelet[3129]: E0311 01:24:12.347863 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.347884 kubelet[3129]: W0311 01:24:12.347881 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.347990 kubelet[3129]: E0311 01:24:12.347899 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.348749 kubelet[3129]: E0311 01:24:12.348732 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.348979 kubelet[3129]: W0311 01:24:12.348841 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.348979 kubelet[3129]: E0311 01:24:12.348861 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.349286 kubelet[3129]: E0311 01:24:12.349218 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.349286 kubelet[3129]: W0311 01:24:12.349232 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.349286 kubelet[3129]: E0311 01:24:12.349245 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.349800 kubelet[3129]: E0311 01:24:12.349648 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.349800 kubelet[3129]: W0311 01:24:12.349661 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.349800 kubelet[3129]: E0311 01:24:12.349674 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.350360 kubelet[3129]: E0311 01:24:12.350345 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.350553 kubelet[3129]: W0311 01:24:12.350430 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.350553 kubelet[3129]: E0311 01:24:12.350449 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.351118 kubelet[3129]: E0311 01:24:12.350904 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.351118 kubelet[3129]: W0311 01:24:12.350957 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.351118 kubelet[3129]: E0311 01:24:12.350973 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.351680 kubelet[3129]: E0311 01:24:12.351442 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.351680 kubelet[3129]: W0311 01:24:12.351493 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.351680 kubelet[3129]: E0311 01:24:12.351508 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.352119 kubelet[3129]: E0311 01:24:12.352051 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.352119 kubelet[3129]: W0311 01:24:12.352083 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.352119 kubelet[3129]: E0311 01:24:12.352096 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.352559 kubelet[3129]: E0311 01:24:12.352526 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.352772 kubelet[3129]: W0311 01:24:12.352643 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.352772 kubelet[3129]: E0311 01:24:12.352664 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.353191 kubelet[3129]: E0311 01:24:12.353096 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.353191 kubelet[3129]: W0311 01:24:12.353125 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.353191 kubelet[3129]: E0311 01:24:12.353140 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.353673 kubelet[3129]: E0311 01:24:12.353545 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.353673 kubelet[3129]: W0311 01:24:12.353565 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.353673 kubelet[3129]: E0311 01:24:12.353578 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.353989 kubelet[3129]: E0311 01:24:12.353961 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.354062 kubelet[3129]: W0311 01:24:12.354037 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.354140 kubelet[3129]: E0311 01:24:12.354107 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.366363 kubelet[3129]: E0311 01:24:12.366323 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 11 01:24:12.366363 kubelet[3129]: W0311 01:24:12.366339 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 11 01:24:12.366620 kubelet[3129]: E0311 01:24:12.366353 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 11 01:24:12.382647 containerd[1717]: time="2026-03-11T01:24:12.382547048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69c97df57c-mlb9g,Uid:9aa401c5-d3c5-4214-88b9-527ba463ee2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c701b8474c1d9922c913e8fbd62ea5b257b0eadad7d85a65ded04d397f9cc6a\""
Mar 11 01:24:12.384355 containerd[1717]: time="2026-03-11T01:24:12.384335131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 11 01:24:12.419265 containerd[1717]: time="2026-03-11T01:24:12.418970790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bd69x,Uid:a28f7d4d-50cb-461b-a20b-e7f0ee518ecf,Namespace:calico-system,Attempt:0,}"
Mar 11 01:24:12.455970 containerd[1717]: time="2026-03-11T01:24:12.455742653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 11 01:24:12.455970 containerd[1717]: time="2026-03-11T01:24:12.455793253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 11 01:24:12.455970 containerd[1717]: time="2026-03-11T01:24:12.455808853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 01:24:12.455970 containerd[1717]: time="2026-03-11T01:24:12.455878853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 11 01:24:12.470613 systemd[1]: Started cri-containerd-a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea.scope - libcontainer container a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea.
Mar 11 01:24:12.487770 containerd[1717]: time="2026-03-11T01:24:12.487634587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bd69x,Uid:a28f7d4d-50cb-461b-a20b-e7f0ee518ecf,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\"" Mar 11 01:24:13.598803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3510708526.mount: Deactivated successfully. Mar 11 01:24:13.647686 kubelet[3129]: E0311 01:24:13.647641 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:14.088432 containerd[1717]: time="2026-03-11T01:24:14.087658191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:14.090506 containerd[1717]: time="2026-03-11T01:24:14.090477636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 11 01:24:14.093246 containerd[1717]: time="2026-03-11T01:24:14.093202481Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:14.100059 containerd[1717]: time="2026-03-11T01:24:14.100028532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:14.100770 containerd[1717]: time="2026-03-11T01:24:14.100665293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.716147801s" Mar 11 01:24:14.100770 containerd[1717]: time="2026-03-11T01:24:14.100698733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 11 01:24:14.102359 containerd[1717]: time="2026-03-11T01:24:14.102212296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 11 01:24:14.118114 containerd[1717]: time="2026-03-11T01:24:14.118006443Z" level=info msg="CreateContainer within sandbox \"1c701b8474c1d9922c913e8fbd62ea5b257b0eadad7d85a65ded04d397f9cc6a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 11 01:24:14.150671 containerd[1717]: time="2026-03-11T01:24:14.150633378Z" level=info msg="CreateContainer within sandbox \"1c701b8474c1d9922c913e8fbd62ea5b257b0eadad7d85a65ded04d397f9cc6a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f22ad891def75c4e6c914db41f2d23441202cd486ad08c1f2b6c9257c5ab62f3\"" Mar 11 01:24:14.151120 containerd[1717]: time="2026-03-11T01:24:14.151020899Z" level=info msg="StartContainer for \"f22ad891def75c4e6c914db41f2d23441202cd486ad08c1f2b6c9257c5ab62f3\"" Mar 11 01:24:14.180599 systemd[1]: Started cri-containerd-f22ad891def75c4e6c914db41f2d23441202cd486ad08c1f2b6c9257c5ab62f3.scope - libcontainer container f22ad891def75c4e6c914db41f2d23441202cd486ad08c1f2b6c9257c5ab62f3. 
Mar 11 01:24:14.214536 containerd[1717]: time="2026-03-11T01:24:14.214495647Z" level=info msg="StartContainer for \"f22ad891def75c4e6c914db41f2d23441202cd486ad08c1f2b6c9257c5ab62f3\" returns successfully" Mar 11 01:24:14.729364 kubelet[3129]: I0311 01:24:14.729229 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69c97df57c-mlb9g" podStartSLOduration=2.011475519 podStartE2EDuration="3.729216364s" podCreationTimestamp="2026-03-11 01:24:11 +0000 UTC" firstStartedPulling="2026-03-11 01:24:12.38381753 +0000 UTC m=+24.858542510" lastFinishedPulling="2026-03-11 01:24:14.101558335 +0000 UTC m=+26.576283355" observedRunningTime="2026-03-11 01:24:14.728800523 +0000 UTC m=+27.203525543" watchObservedRunningTime="2026-03-11 01:24:14.729216364 +0000 UTC m=+27.203941384" Mar 11 01:24:14.753576 kubelet[3129]: E0311 01:24:14.753440 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:14.753576 kubelet[3129]: W0311 01:24:14.753480 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:14.753576 kubelet[3129]: E0311 01:24:14.753499 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:14.754038 kubelet[3129]: E0311 01:24:14.753653 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:14.754038 kubelet[3129]: W0311 01:24:14.753661 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:14.754038 kubelet[3129]: E0311 01:24:14.753706 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 11 01:24:14.754038 kubelet[3129]: E0311 01:24:14.753845 3129 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 11 01:24:14.754038 kubelet[3129]: W0311 01:24:14.753858 3129 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 11 01:24:14.754038 kubelet[3129]: E0311 01:24:14.753868 3129 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 11 01:24:15.273966 containerd[1717]: time="2026-03-11T01:24:15.273920811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:15.276603 containerd[1717]: time="2026-03-11T01:24:15.276576816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 11 01:24:15.279447 containerd[1717]: time="2026-03-11T01:24:15.279420940Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:15.282875 containerd[1717]: time="2026-03-11T01:24:15.282796346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:15.284028 containerd[1717]: time="2026-03-11T01:24:15.283577427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.181328411s" Mar 11 01:24:15.284028 containerd[1717]: time="2026-03-11T01:24:15.283611387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 11 01:24:15.291261 containerd[1717]: time="2026-03-11T01:24:15.291234960Z" level=info msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 11 01:24:15.321763 containerd[1717]: time="2026-03-11T01:24:15.321732172Z" level=info msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923\"" Mar 11 01:24:15.322776 containerd[1717]: time="2026-03-11T01:24:15.322233733Z" level=info msg="StartContainer for \"5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923\"" Mar 11 01:24:15.352587 systemd[1]: Started cri-containerd-5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923.scope - libcontainer container 5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923. Mar 11 01:24:15.379588 containerd[1717]: time="2026-03-11T01:24:15.379493991Z" level=info msg="StartContainer for \"5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923\" returns successfully" Mar 11 01:24:15.385413 systemd[1]: cri-containerd-5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923.scope: Deactivated successfully. Mar 11 01:24:15.405440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923-rootfs.mount: Deactivated successfully. 
Mar 11 01:24:15.857513 kubelet[3129]: E0311 01:24:15.647693 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:16.477163 containerd[1717]: time="2026-03-11T01:24:16.477077259Z" level=info msg="shim disconnected" id=5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923 namespace=k8s.io Mar 11 01:24:16.477163 containerd[1717]: time="2026-03-11T01:24:16.477156820Z" level=warning msg="cleaning up after shim disconnected" id=5cb528c0c4629dad153aaf548da02393842213cdfdcd1b791dfb62d101e03923 namespace=k8s.io Mar 11 01:24:16.477163 containerd[1717]: time="2026-03-11T01:24:16.477167020Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 01:24:16.722107 containerd[1717]: time="2026-03-11T01:24:16.722026437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 11 01:24:17.648309 kubelet[3129]: E0311 01:24:17.647980 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:19.649042 kubelet[3129]: E0311 01:24:19.648710 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:21.474027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229883655.mount: Deactivated successfully. 
Mar 11 01:24:21.531990 containerd[1717]: time="2026-03-11T01:24:21.531939609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:21.534507 containerd[1717]: time="2026-03-11T01:24:21.534343573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 11 01:24:21.537472 containerd[1717]: time="2026-03-11T01:24:21.537085218Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:21.541142 containerd[1717]: time="2026-03-11T01:24:21.541096384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:21.541982 containerd[1717]: time="2026-03-11T01:24:21.541832466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.819757469s" Mar 11 01:24:21.541982 containerd[1717]: time="2026-03-11T01:24:21.541879426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 11 01:24:21.549924 containerd[1717]: time="2026-03-11T01:24:21.549890119Z" level=info msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 11 01:24:21.586400 containerd[1717]: time="2026-03-11T01:24:21.586312501Z" level=info 
msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c\"" Mar 11 01:24:21.586770 containerd[1717]: time="2026-03-11T01:24:21.586751302Z" level=info msg="StartContainer for \"3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c\"" Mar 11 01:24:21.618609 systemd[1]: Started cri-containerd-3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c.scope - libcontainer container 3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c. Mar 11 01:24:21.644694 containerd[1717]: time="2026-03-11T01:24:21.644591280Z" level=info msg="StartContainer for \"3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c\" returns successfully" Mar 11 01:24:21.648489 kubelet[3129]: E0311 01:24:21.647601 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:21.688900 systemd[1]: cri-containerd-3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c.scope: Deactivated successfully. Mar 11 01:24:22.474358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c-rootfs.mount: Deactivated successfully. 
Mar 11 01:24:23.230741 containerd[1717]: time="2026-03-11T01:24:23.230607365Z" level=info msg="shim disconnected" id=3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c namespace=k8s.io Mar 11 01:24:23.230741 containerd[1717]: time="2026-03-11T01:24:23.230672805Z" level=warning msg="cleaning up after shim disconnected" id=3b36288cafe84b66094e8dd9cd5290fb4f0a75941894e34af619bb4588db9a9c namespace=k8s.io Mar 11 01:24:23.230741 containerd[1717]: time="2026-03-11T01:24:23.230681605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 01:24:23.648874 kubelet[3129]: E0311 01:24:23.646969 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:23.737819 containerd[1717]: time="2026-03-11T01:24:23.737763863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 11 01:24:25.648404 kubelet[3129]: E0311 01:24:25.647251 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:26.220223 containerd[1717]: time="2026-03-11T01:24:26.220179186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:26.223636 containerd[1717]: time="2026-03-11T01:24:26.223607272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 11 01:24:26.229673 containerd[1717]: time="2026-03-11T01:24:26.229493282Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:26.234373 containerd[1717]: time="2026-03-11T01:24:26.234343290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:26.235470 containerd[1717]: time="2026-03-11T01:24:26.234930331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.497114428s" Mar 11 01:24:26.235470 containerd[1717]: time="2026-03-11T01:24:26.235279372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 11 01:24:26.242380 containerd[1717]: time="2026-03-11T01:24:26.242354744Z" level=info msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 11 01:24:26.274924 containerd[1717]: time="2026-03-11T01:24:26.274887959Z" level=info msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4\"" Mar 11 01:24:26.275600 containerd[1717]: time="2026-03-11T01:24:26.275574640Z" level=info msg="StartContainer for \"9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4\"" Mar 11 01:24:26.302610 systemd[1]: Started 
cri-containerd-9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4.scope - libcontainer container 9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4. Mar 11 01:24:26.333051 containerd[1717]: time="2026-03-11T01:24:26.333007977Z" level=info msg="StartContainer for \"9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4\" returns successfully" Mar 11 01:24:27.502476 containerd[1717]: time="2026-03-11T01:24:27.502420757Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 11 01:24:27.505386 systemd[1]: cri-containerd-9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4.scope: Deactivated successfully. Mar 11 01:24:27.524560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4-rootfs.mount: Deactivated successfully. Mar 11 01:24:27.560466 kubelet[3129]: I0311 01:24:27.560431 3129 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 11 01:24:28.382111 containerd[1717]: time="2026-03-11T01:24:28.382020570Z" level=info msg="shim disconnected" id=9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4 namespace=k8s.io Mar 11 01:24:28.382937 containerd[1717]: time="2026-03-11T01:24:28.382613411Z" level=warning msg="cleaning up after shim disconnected" id=9b3240e384449051aa15de987db1dee805ae666a80f0a4009a1a870cb7da8ef4 namespace=k8s.io Mar 11 01:24:28.382937 containerd[1717]: time="2026-03-11T01:24:28.382640651Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 11 01:24:28.384331 systemd[1]: Created slice kubepods-besteffort-podc6f6763b_72d4_4230_abc8_f8b4f7ba7e3b.slice - libcontainer container kubepods-besteffort-podc6f6763b_72d4_4230_abc8_f8b4f7ba7e3b.slice. 
Mar 11 01:24:28.394476 containerd[1717]: time="2026-03-11T01:24:28.394426071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjh4h,Uid:c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:28.395268 systemd[1]: Created slice kubepods-burstable-pod1923a631_58e0_434b_b9f7_d22db35c541a.slice - libcontainer container kubepods-burstable-pod1923a631_58e0_434b_b9f7_d22db35c541a.slice. Mar 11 01:24:28.406932 systemd[1]: Created slice kubepods-burstable-pod48c32900_5045_4f60_bbe5_13570adfb73f.slice - libcontainer container kubepods-burstable-pod48c32900_5045_4f60_bbe5_13570adfb73f.slice. Mar 11 01:24:28.412085 systemd[1]: Created slice kubepods-besteffort-pod0b7b3ee1_30bd_400b_8e2b_54c143148a44.slice - libcontainer container kubepods-besteffort-pod0b7b3ee1_30bd_400b_8e2b_54c143148a44.slice. Mar 11 01:24:28.424405 systemd[1]: Created slice kubepods-besteffort-podb4eb406e_fa82_44d9_b52c_29c1dd09bc5c.slice - libcontainer container kubepods-besteffort-podb4eb406e_fa82_44d9_b52c_29c1dd09bc5c.slice. Mar 11 01:24:28.438038 systemd[1]: Created slice kubepods-besteffort-pod4d253aa8_9d1e_4431_94a0_bc4663390ef6.slice - libcontainer container kubepods-besteffort-pod4d253aa8_9d1e_4431_94a0_bc4663390ef6.slice. Mar 11 01:24:28.450760 systemd[1]: Created slice kubepods-besteffort-pod9c69ff73_f4c7_42e1_b1b3_d584530adfe4.slice - libcontainer container kubepods-besteffort-pod9c69ff73_f4c7_42e1_b1b3_d584530adfe4.slice. 
Mar 11 01:24:28.458104 kubelet[3129]: I0311 01:24:28.457061 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b7b3ee1-30bd-400b-8e2b-54c143148a44-tigera-ca-bundle\") pod \"calico-kube-controllers-946cb894d-x99qm\" (UID: \"0b7b3ee1-30bd-400b-8e2b-54c143148a44\") " pod="calico-system/calico-kube-controllers-946cb894d-x99qm" Mar 11 01:24:28.458104 kubelet[3129]: I0311 01:24:28.457103 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48c32900-5045-4f60-bbe5-13570adfb73f-config-volume\") pod \"coredns-66bc5c9577-2dzt4\" (UID: \"48c32900-5045-4f60-bbe5-13570adfb73f\") " pod="kube-system/coredns-66bc5c9577-2dzt4" Mar 11 01:24:28.458104 kubelet[3129]: I0311 01:24:28.457178 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgsm4\" (UniqueName: \"kubernetes.io/projected/48c32900-5045-4f60-bbe5-13570adfb73f-kube-api-access-wgsm4\") pod \"coredns-66bc5c9577-2dzt4\" (UID: \"48c32900-5045-4f60-bbe5-13570adfb73f\") " pod="kube-system/coredns-66bc5c9577-2dzt4" Mar 11 01:24:28.458104 kubelet[3129]: I0311 01:24:28.457210 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d253aa8-9d1e-4431-94a0-bc4663390ef6-calico-apiserver-certs\") pod \"calico-apiserver-59b9679847-btrhp\" (UID: \"4d253aa8-9d1e-4431-94a0-bc4663390ef6\") " pod="calico-system/calico-apiserver-59b9679847-btrhp" Mar 11 01:24:28.458104 kubelet[3129]: I0311 01:24:28.457354 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkgs\" (UniqueName: \"kubernetes.io/projected/1923a631-58e0-434b-b9f7-d22db35c541a-kube-api-access-crkgs\") pod \"coredns-66bc5c9577-vq8w4\" 
(UID: \"1923a631-58e0-434b-b9f7-d22db35c541a\") " pod="kube-system/coredns-66bc5c9577-vq8w4" Mar 11 01:24:28.458286 kubelet[3129]: I0311 01:24:28.457495 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cjsk\" (UniqueName: \"kubernetes.io/projected/0b7b3ee1-30bd-400b-8e2b-54c143148a44-kube-api-access-9cjsk\") pod \"calico-kube-controllers-946cb894d-x99qm\" (UID: \"0b7b3ee1-30bd-400b-8e2b-54c143148a44\") " pod="calico-system/calico-kube-controllers-946cb894d-x99qm" Mar 11 01:24:28.458286 kubelet[3129]: I0311 01:24:28.457523 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-nginx-config\") pod \"whisker-6cfbf665c8-8wp24\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " pod="calico-system/whisker-6cfbf665c8-8wp24" Mar 11 01:24:28.458286 kubelet[3129]: I0311 01:24:28.457693 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76q4j\" (UniqueName: \"kubernetes.io/projected/e98682e7-a223-4af4-85e4-1cd58ea80d8c-kube-api-access-76q4j\") pod \"goldmane-cccfbd5cf-tglk5\" (UID: \"e98682e7-a223-4af4-85e4-1cd58ea80d8c\") " pod="calico-system/goldmane-cccfbd5cf-tglk5" Mar 11 01:24:28.458286 kubelet[3129]: I0311 01:24:28.457725 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1923a631-58e0-434b-b9f7-d22db35c541a-config-volume\") pod \"coredns-66bc5c9577-vq8w4\" (UID: \"1923a631-58e0-434b-b9f7-d22db35c541a\") " pod="kube-system/coredns-66bc5c9577-vq8w4" Mar 11 01:24:28.458286 kubelet[3129]: I0311 01:24:28.457850 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpntv\" (UniqueName: 
\"kubernetes.io/projected/4d253aa8-9d1e-4431-94a0-bc4663390ef6-kube-api-access-gpntv\") pod \"calico-apiserver-59b9679847-btrhp\" (UID: \"4d253aa8-9d1e-4431-94a0-bc4663390ef6\") " pod="calico-system/calico-apiserver-59b9679847-btrhp" Mar 11 01:24:28.458393 kubelet[3129]: I0311 01:24:28.457875 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e98682e7-a223-4af4-85e4-1cd58ea80d8c-config\") pod \"goldmane-cccfbd5cf-tglk5\" (UID: \"e98682e7-a223-4af4-85e4-1cd58ea80d8c\") " pod="calico-system/goldmane-cccfbd5cf-tglk5" Mar 11 01:24:28.458393 kubelet[3129]: I0311 01:24:28.458031 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e98682e7-a223-4af4-85e4-1cd58ea80d8c-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-tglk5\" (UID: \"e98682e7-a223-4af4-85e4-1cd58ea80d8c\") " pod="calico-system/goldmane-cccfbd5cf-tglk5" Mar 11 01:24:28.458393 kubelet[3129]: I0311 01:24:28.458186 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-ca-bundle\") pod \"whisker-6cfbf665c8-8wp24\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " pod="calico-system/whisker-6cfbf665c8-8wp24" Mar 11 01:24:28.458393 kubelet[3129]: I0311 01:24:28.458238 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4eb406e-fa82-44d9-b52c-29c1dd09bc5c-calico-apiserver-certs\") pod \"calico-apiserver-59b9679847-vdzb5\" (UID: \"b4eb406e-fa82-44d9-b52c-29c1dd09bc5c\") " pod="calico-system/calico-apiserver-59b9679847-vdzb5" Mar 11 01:24:28.458493 kubelet[3129]: I0311 01:24:28.458394 3129 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8lq\" (UniqueName: \"kubernetes.io/projected/b4eb406e-fa82-44d9-b52c-29c1dd09bc5c-kube-api-access-jh8lq\") pod \"calico-apiserver-59b9679847-vdzb5\" (UID: \"b4eb406e-fa82-44d9-b52c-29c1dd09bc5c\") " pod="calico-system/calico-apiserver-59b9679847-vdzb5" Mar 11 01:24:28.458819 kubelet[3129]: I0311 01:24:28.458563 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-backend-key-pair\") pod \"whisker-6cfbf665c8-8wp24\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " pod="calico-system/whisker-6cfbf665c8-8wp24" Mar 11 01:24:28.458819 kubelet[3129]: I0311 01:24:28.458599 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fsf\" (UniqueName: \"kubernetes.io/projected/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-kube-api-access-27fsf\") pod \"whisker-6cfbf665c8-8wp24\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " pod="calico-system/whisker-6cfbf665c8-8wp24" Mar 11 01:24:28.458819 kubelet[3129]: I0311 01:24:28.458756 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e98682e7-a223-4af4-85e4-1cd58ea80d8c-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-tglk5\" (UID: \"e98682e7-a223-4af4-85e4-1cd58ea80d8c\") " pod="calico-system/goldmane-cccfbd5cf-tglk5" Mar 11 01:24:28.473478 systemd[1]: Created slice kubepods-besteffort-pode98682e7_a223_4af4_85e4_1cd58ea80d8c.slice - libcontainer container kubepods-besteffort-pode98682e7_a223_4af4_85e4_1cd58ea80d8c.slice. 
Mar 11 01:24:28.509414 containerd[1717]: time="2026-03-11T01:24:28.509373786Z" level=error msg="Failed to destroy network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.511469 containerd[1717]: time="2026-03-11T01:24:28.510022107Z" level=error msg="encountered an error cleaning up failed sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.511628 containerd[1717]: time="2026-03-11T01:24:28.511600350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjh4h,Uid:c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.511908 kubelet[3129]: E0311 01:24:28.511871 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.511977 kubelet[3129]: E0311 01:24:28.511928 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tjh4h" Mar 11 01:24:28.511977 kubelet[3129]: E0311 01:24:28.511947 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tjh4h" Mar 11 01:24:28.512024 kubelet[3129]: E0311 01:24:28.511993 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tjh4h_calico-system(c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tjh4h_calico-system(c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:28.512711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55-shm.mount: Deactivated successfully. 
Mar 11 01:24:28.706595 containerd[1717]: time="2026-03-11T01:24:28.706556121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vq8w4,Uid:1923a631-58e0-434b-b9f7-d22db35c541a,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:28.715422 containerd[1717]: time="2026-03-11T01:24:28.715161856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dzt4,Uid:48c32900-5045-4f60-bbe5-13570adfb73f,Namespace:kube-system,Attempt:0,}" Mar 11 01:24:28.722248 containerd[1717]: time="2026-03-11T01:24:28.722213148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-946cb894d-x99qm,Uid:0b7b3ee1-30bd-400b-8e2b-54c143148a44,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:28.738435 containerd[1717]: time="2026-03-11T01:24:28.738400215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-vdzb5,Uid:b4eb406e-fa82-44d9-b52c-29c1dd09bc5c,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:28.748013 containerd[1717]: time="2026-03-11T01:24:28.747715791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-btrhp,Uid:4d253aa8-9d1e-4431-94a0-bc4663390ef6,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:28.754382 kubelet[3129]: I0311 01:24:28.754304 3129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:28.755638 containerd[1717]: time="2026-03-11T01:24:28.755449724Z" level=info msg="StopPodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\"" Mar 11 01:24:28.755883 containerd[1717]: time="2026-03-11T01:24:28.755752245Z" level=info msg="Ensure that sandbox b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55 in task-service has been cleanup successfully" Mar 11 01:24:28.781748 containerd[1717]: time="2026-03-11T01:24:28.781702169Z" level=info msg="CreateContainer within sandbox 
\"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 11 01:24:28.800588 containerd[1717]: time="2026-03-11T01:24:28.800541841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cfbf665c8-8wp24,Uid:9c69ff73-f4c7-42e1-b1b3-d584530adfe4,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:28.801025 containerd[1717]: time="2026-03-11T01:24:28.800996922Z" level=error msg="StopPodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" failed" error="failed to destroy network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.802055 containerd[1717]: time="2026-03-11T01:24:28.801346762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tglk5,Uid:e98682e7-a223-4af4-85e4-1cd58ea80d8c,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:28.802140 kubelet[3129]: E0311 01:24:28.801887 3129 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:28.802140 kubelet[3129]: E0311 01:24:28.801933 3129 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55"} Mar 11 01:24:28.802140 kubelet[3129]: E0311 01:24:28.801975 3129 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 11 01:24:28.802140 kubelet[3129]: E0311 01:24:28.802002 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tjh4h" podUID="c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b" Mar 11 01:24:28.830757 containerd[1717]: time="2026-03-11T01:24:28.830703492Z" level=error msg="Failed to destroy network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.831151 containerd[1717]: time="2026-03-11T01:24:28.831126853Z" level=error msg="encountered an error cleaning up failed sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.831257 containerd[1717]: time="2026-03-11T01:24:28.831237373Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vq8w4,Uid:1923a631-58e0-434b-b9f7-d22db35c541a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.831733 kubelet[3129]: E0311 01:24:28.831526 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.831733 kubelet[3129]: E0311 01:24:28.831617 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vq8w4" Mar 11 01:24:28.831733 kubelet[3129]: E0311 01:24:28.831636 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vq8w4" Mar 11 01:24:28.831869 kubelet[3129]: E0311 01:24:28.831695 3129 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vq8w4_kube-system(1923a631-58e0-434b-b9f7-d22db35c541a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vq8w4_kube-system(1923a631-58e0-434b-b9f7-d22db35c541a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vq8w4" podUID="1923a631-58e0-434b-b9f7-d22db35c541a" Mar 11 01:24:28.920254 containerd[1717]: time="2026-03-11T01:24:28.919975164Z" level=error msg="Failed to destroy network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.920254 containerd[1717]: time="2026-03-11T01:24:28.920248884Z" level=error msg="encountered an error cleaning up failed sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.920423 containerd[1717]: time="2026-03-11T01:24:28.920292924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dzt4,Uid:48c32900-5045-4f60-bbe5-13570adfb73f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.920607 kubelet[3129]: E0311 01:24:28.920567 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:28.920673 kubelet[3129]: E0311 01:24:28.920642 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2dzt4" Mar 11 01:24:28.920673 kubelet[3129]: E0311 01:24:28.920663 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2dzt4" Mar 11 01:24:28.920813 kubelet[3129]: E0311 01:24:28.920722 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2dzt4_kube-system(48c32900-5045-4f60-bbe5-13570adfb73f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2dzt4_kube-system(48c32900-5045-4f60-bbe5-13570adfb73f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2dzt4" podUID="48c32900-5045-4f60-bbe5-13570adfb73f" Mar 11 01:24:28.938931 containerd[1717]: time="2026-03-11T01:24:28.938894996Z" level=info msg="CreateContainer within sandbox \"a0246106417208f4dcdf1f82b9ae1faec16a112586657b234ea41981924681ea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198\"" Mar 11 01:24:28.941252 containerd[1717]: time="2026-03-11T01:24:28.940329118Z" level=info msg="StartContainer for \"1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198\"" Mar 11 01:24:28.999949 systemd[1]: Started cri-containerd-1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198.scope - libcontainer container 1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198. 
Mar 11 01:24:29.025229 containerd[1717]: time="2026-03-11T01:24:29.024142661Z" level=error msg="Failed to destroy network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.025857 containerd[1717]: time="2026-03-11T01:24:29.025739703Z" level=error msg="encountered an error cleaning up failed sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.025857 containerd[1717]: time="2026-03-11T01:24:29.025821903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-946cb894d-x99qm,Uid:0b7b3ee1-30bd-400b-8e2b-54c143148a44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.026431 kubelet[3129]: E0311 01:24:29.026255 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.026431 kubelet[3129]: E0311 01:24:29.026315 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-946cb894d-x99qm" Mar 11 01:24:29.026431 kubelet[3129]: E0311 01:24:29.026333 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-946cb894d-x99qm" Mar 11 01:24:29.027905 kubelet[3129]: E0311 01:24:29.026405 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-946cb894d-x99qm_calico-system(0b7b3ee1-30bd-400b-8e2b-54c143148a44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-946cb894d-x99qm_calico-system(0b7b3ee1-30bd-400b-8e2b-54c143148a44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-946cb894d-x99qm" podUID="0b7b3ee1-30bd-400b-8e2b-54c143148a44" Mar 11 01:24:29.054629 containerd[1717]: time="2026-03-11T01:24:29.054502472Z" level=info msg="StartContainer for \"1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198\" returns successfully" Mar 11 01:24:29.056793 containerd[1717]: time="2026-03-11T01:24:29.056655636Z" level=error 
msg="Failed to destroy network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.058013 containerd[1717]: time="2026-03-11T01:24:29.057978158Z" level=error msg="encountered an error cleaning up failed sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.059013 containerd[1717]: time="2026-03-11T01:24:29.058694959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-vdzb5,Uid:b4eb406e-fa82-44d9-b52c-29c1dd09bc5c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.059116 kubelet[3129]: E0311 01:24:29.058987 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.059116 kubelet[3129]: E0311 01:24:29.059031 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59b9679847-vdzb5" Mar 11 01:24:29.059116 kubelet[3129]: E0311 01:24:29.059047 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59b9679847-vdzb5" Mar 11 01:24:29.059207 kubelet[3129]: E0311 01:24:29.059099 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59b9679847-vdzb5_calico-system(b4eb406e-fa82-44d9-b52c-29c1dd09bc5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59b9679847-vdzb5_calico-system(b4eb406e-fa82-44d9-b52c-29c1dd09bc5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59b9679847-vdzb5" podUID="b4eb406e-fa82-44d9-b52c-29c1dd09bc5c" Mar 11 01:24:29.091483 containerd[1717]: time="2026-03-11T01:24:29.091411055Z" level=error msg="Failed to destroy network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 11 01:24:29.091708 containerd[1717]: time="2026-03-11T01:24:29.091632175Z" level=error msg="Failed to destroy network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.091997 containerd[1717]: time="2026-03-11T01:24:29.091961696Z" level=error msg="encountered an error cleaning up failed sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.092056 containerd[1717]: time="2026-03-11T01:24:29.092025896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tglk5,Uid:e98682e7-a223-4af4-85e4-1cd58ea80d8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.092112 containerd[1717]: time="2026-03-11T01:24:29.091978576Z" level=error msg="encountered an error cleaning up failed sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.092134 containerd[1717]: time="2026-03-11T01:24:29.092122016Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-59b9679847-btrhp,Uid:4d253aa8-9d1e-4431-94a0-bc4663390ef6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.092793 kubelet[3129]: E0311 01:24:29.092598 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.092793 kubelet[3129]: E0311 01:24:29.092649 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59b9679847-btrhp" Mar 11 01:24:29.092793 kubelet[3129]: E0311 01:24:29.092666 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-59b9679847-btrhp" Mar 11 01:24:29.092929 kubelet[3129]: E0311 01:24:29.092725 3129 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59b9679847-btrhp_calico-system(4d253aa8-9d1e-4431-94a0-bc4663390ef6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59b9679847-btrhp_calico-system(4d253aa8-9d1e-4431-94a0-bc4663390ef6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-59b9679847-btrhp" podUID="4d253aa8-9d1e-4431-94a0-bc4663390ef6" Mar 11 01:24:29.092929 kubelet[3129]: E0311 01:24:29.092765 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.092929 kubelet[3129]: E0311 01:24:29.092787 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tglk5" Mar 11 01:24:29.093767 kubelet[3129]: E0311 01:24:29.092799 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tglk5" Mar 11 01:24:29.093767 kubelet[3129]: E0311 01:24:29.092821 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-tglk5_calico-system(e98682e7-a223-4af4-85e4-1cd58ea80d8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-tglk5_calico-system(e98682e7-a223-4af4-85e4-1cd58ea80d8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-tglk5" podUID="e98682e7-a223-4af4-85e4-1cd58ea80d8c" Mar 11 01:24:29.106972 containerd[1717]: time="2026-03-11T01:24:29.106934641Z" level=error msg="Failed to destroy network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.107522 containerd[1717]: time="2026-03-11T01:24:29.107295442Z" level=error msg="encountered an error cleaning up failed sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.107522 containerd[1717]: time="2026-03-11T01:24:29.107341002Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6cfbf665c8-8wp24,Uid:9c69ff73-f4c7-42e1-b1b3-d584530adfe4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.108493 kubelet[3129]: E0311 01:24:29.107946 3129 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 11 01:24:29.108493 kubelet[3129]: E0311 01:24:29.107987 3129 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cfbf665c8-8wp24" Mar 11 01:24:29.108493 kubelet[3129]: E0311 01:24:29.108007 3129 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cfbf665c8-8wp24" Mar 11 01:24:29.108742 kubelet[3129]: E0311 01:24:29.108046 3129 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-6cfbf665c8-8wp24_calico-system(9c69ff73-f4c7-42e1-b1b3-d584530adfe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cfbf665c8-8wp24_calico-system(9c69ff73-f4c7-42e1-b1b3-d584530adfe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cfbf665c8-8wp24" podUID="9c69ff73-f4c7-42e1-b1b3-d584530adfe4" Mar 11 01:24:29.756575 kubelet[3129]: I0311 01:24:29.756541 3129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:29.757926 containerd[1717]: time="2026-03-11T01:24:29.756970865Z" level=info msg="StopPodSandbox for \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\"" Mar 11 01:24:29.757926 containerd[1717]: time="2026-03-11T01:24:29.757129106Z" level=info msg="Ensure that sandbox e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e in task-service has been cleanup successfully" Mar 11 01:24:29.759499 kubelet[3129]: I0311 01:24:29.759196 3129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:29.760706 containerd[1717]: time="2026-03-11T01:24:29.759791390Z" level=info msg="StopPodSandbox for \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\"" Mar 11 01:24:29.760706 containerd[1717]: time="2026-03-11T01:24:29.759922390Z" level=info msg="Ensure that sandbox fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589 in task-service has been cleanup successfully" Mar 11 01:24:29.766815 kubelet[3129]: I0311 01:24:29.766786 3129 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:29.767734 containerd[1717]: time="2026-03-11T01:24:29.767578164Z" level=info msg="StopPodSandbox for \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\"" Mar 11 01:24:29.768545 containerd[1717]: time="2026-03-11T01:24:29.768378725Z" level=info msg="Ensure that sandbox 51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1 in task-service has been cleanup successfully" Mar 11 01:24:29.769589 kubelet[3129]: I0311 01:24:29.769388 3129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:29.770158 containerd[1717]: time="2026-03-11T01:24:29.770134288Z" level=info msg="StopPodSandbox for \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\"" Mar 11 01:24:29.770369 containerd[1717]: time="2026-03-11T01:24:29.770351288Z" level=info msg="Ensure that sandbox e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3 in task-service has been cleanup successfully" Mar 11 01:24:29.777318 kubelet[3129]: I0311 01:24:29.777297 3129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:29.778447 containerd[1717]: time="2026-03-11T01:24:29.778343102Z" level=info msg="StopPodSandbox for \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\"" Mar 11 01:24:29.778692 containerd[1717]: time="2026-03-11T01:24:29.778673462Z" level=info msg="Ensure that sandbox abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52 in task-service has been cleanup successfully" Mar 11 01:24:29.787830 kubelet[3129]: I0311 01:24:29.787800 3129 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:29.791160 containerd[1717]: time="2026-03-11T01:24:29.790714523Z" level=info msg="StopPodSandbox for \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\"" Mar 11 01:24:29.791160 containerd[1717]: time="2026-03-11T01:24:29.790931203Z" level=info msg="Ensure that sandbox 8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3 in task-service has been cleanup successfully" Mar 11 01:24:29.803071 kubelet[3129]: I0311 01:24:29.803040 3129 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:29.804338 containerd[1717]: time="2026-03-11T01:24:29.804301426Z" level=info msg="StopPodSandbox for \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\"" Mar 11 01:24:29.805326 containerd[1717]: time="2026-03-11T01:24:29.805289628Z" level=info msg="Ensure that sandbox e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936 in task-service has been cleanup successfully" Mar 11 01:24:29.861078 kubelet[3129]: I0311 01:24:29.860156 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bd69x" podStartSLOduration=4.112924097 podStartE2EDuration="17.8599608s" podCreationTimestamp="2026-03-11 01:24:12 +0000 UTC" firstStartedPulling="2026-03-11 01:24:12.48922655 +0000 UTC m=+24.963951570" lastFinishedPulling="2026-03-11 01:24:26.236263253 +0000 UTC m=+38.710988273" observedRunningTime="2026-03-11 01:24:29.858568798 +0000 UTC m=+42.333293818" watchObservedRunningTime="2026-03-11 01:24:29.8599608 +0000 UTC m=+42.334685820" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:29.960 [INFO][4293] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:29.960 [INFO][4293] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" iface="eth0" netns="/var/run/netns/cni-72360a2d-88fa-374a-0135-b5c1b2f35d75" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:29.960 [INFO][4293] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" iface="eth0" netns="/var/run/netns/cni-72360a2d-88fa-374a-0135-b5c1b2f35d75" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:29.961 [INFO][4293] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" iface="eth0" netns="/var/run/netns/cni-72360a2d-88fa-374a-0135-b5c1b2f35d75" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:29.961 [INFO][4293] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:29.961 [INFO][4293] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.030 [INFO][4372] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.030 [INFO][4372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.030 [INFO][4372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.062 [WARNING][4372] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.062 [INFO][4372] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.065 [INFO][4372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.079390 containerd[1717]: 2026-03-11 01:24:30.075 [INFO][4293] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:30.082414 systemd[1]: run-netns-cni\x2d72360a2d\x2d88fa\x2d374a\x2d0135\x2db5c1b2f35d75.mount: Deactivated successfully. 
Mar 11 01:24:30.082833 containerd[1717]: time="2026-03-11T01:24:30.082521819Z" level=info msg="TearDown network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\" successfully" Mar 11 01:24:30.082833 containerd[1717]: time="2026-03-11T01:24:30.082549019Z" level=info msg="StopPodSandbox for \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\" returns successfully" Mar 11 01:24:30.091617 containerd[1717]: time="2026-03-11T01:24:30.091244033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-946cb894d-x99qm,Uid:0b7b3ee1-30bd-400b-8e2b-54c143148a44,Namespace:calico-system,Attempt:1,}" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:29.948 [INFO][4256] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:29.949 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" iface="eth0" netns="/var/run/netns/cni-2f4b1881-58d8-f840-4a9f-6aff9f8eca05" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:29.951 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" iface="eth0" netns="/var/run/netns/cni-2f4b1881-58d8-f840-4a9f-6aff9f8eca05" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:29.953 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" iface="eth0" netns="/var/run/netns/cni-2f4b1881-58d8-f840-4a9f-6aff9f8eca05" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:29.953 [INFO][4256] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:29.953 [INFO][4256] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.037 [INFO][4367] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.037 [INFO][4367] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.067 [INFO][4367] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.094 [WARNING][4367] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.094 [INFO][4367] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.096 [INFO][4367] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.107813 containerd[1717]: 2026-03-11 01:24:30.104 [INFO][4256] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:30.108707 containerd[1717]: time="2026-03-11T01:24:30.108428983Z" level=info msg="TearDown network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\" successfully" Mar 11 01:24:30.108707 containerd[1717]: time="2026-03-11T01:24:30.108495783Z" level=info msg="StopPodSandbox for \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\" returns successfully" Mar 11 01:24:30.114148 systemd[1]: run-netns-cni\x2d2f4b1881\x2d58d8\x2df840\x2d4a9f\x2d6aff9f8eca05.mount: Deactivated successfully. Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.003 [INFO][4308] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.003 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" iface="eth0" netns="/var/run/netns/cni-ea38b353-9461-4e7f-98fe-0a2784a9eb64" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.003 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" iface="eth0" netns="/var/run/netns/cni-ea38b353-9461-4e7f-98fe-0a2784a9eb64" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.003 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" iface="eth0" netns="/var/run/netns/cni-ea38b353-9461-4e7f-98fe-0a2784a9eb64" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.003 [INFO][4308] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.004 [INFO][4308] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.092 [INFO][4382] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.092 [INFO][4382] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.096 [INFO][4382] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.113 [WARNING][4382] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.113 [INFO][4382] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.123 [INFO][4382] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.133905 containerd[1717]: 2026-03-11 01:24:30.127 [INFO][4308] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:30.135847 containerd[1717]: time="2026-03-11T01:24:30.135771789Z" level=info msg="TearDown network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\" successfully" Mar 11 01:24:30.135847 containerd[1717]: time="2026-03-11T01:24:30.135800789Z" level=info msg="StopPodSandbox for \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\" returns successfully" Mar 11 01:24:30.141033 systemd[1]: run-netns-cni\x2dea38b353\x2d9461\x2d4e7f\x2d98fe\x2d0a2784a9eb64.mount: Deactivated successfully. 
Mar 11 01:24:30.143149 containerd[1717]: time="2026-03-11T01:24:30.142703081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vq8w4,Uid:1923a631-58e0-434b-b9f7-d22db35c541a,Namespace:kube-system,Attempt:1,}" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:29.999 [INFO][4269] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:29.999 [INFO][4269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" iface="eth0" netns="/var/run/netns/cni-936e59c9-3bd0-a1de-4bef-ea5940a61e4a" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:29.999 [INFO][4269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" iface="eth0" netns="/var/run/netns/cni-936e59c9-3bd0-a1de-4bef-ea5940a61e4a" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.000 [INFO][4269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" iface="eth0" netns="/var/run/netns/cni-936e59c9-3bd0-a1de-4bef-ea5940a61e4a" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.000 [INFO][4269] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.000 [INFO][4269] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.107 [INFO][4380] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.107 [INFO][4380] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.118 [INFO][4380] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.145 [WARNING][4380] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.145 [INFO][4380] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.146 [INFO][4380] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.151295 containerd[1717]: 2026-03-11 01:24:30.149 [INFO][4269] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:30.153938 containerd[1717]: time="2026-03-11T01:24:30.153912140Z" level=info msg="TearDown network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\" successfully" Mar 11 01:24:30.154038 containerd[1717]: time="2026-03-11T01:24:30.154022940Z" level=info msg="StopPodSandbox for \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\" returns successfully" Mar 11 01:24:30.158988 containerd[1717]: time="2026-03-11T01:24:30.158967508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-btrhp,Uid:4d253aa8-9d1e-4431-94a0-bc4663390ef6,Namespace:calico-system,Attempt:1,}" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:29.992 [INFO][4291] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:29.992 [INFO][4291] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" iface="eth0" netns="/var/run/netns/cni-4596e889-0bef-b266-3eaa-34eaa302083a" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:29.994 [INFO][4291] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" iface="eth0" netns="/var/run/netns/cni-4596e889-0bef-b266-3eaa-34eaa302083a" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:29.994 [INFO][4291] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" iface="eth0" netns="/var/run/netns/cni-4596e889-0bef-b266-3eaa-34eaa302083a" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:29.994 [INFO][4291] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:29.995 [INFO][4291] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.116 [INFO][4378] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.116 [INFO][4378] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.149 [INFO][4378] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.166 [WARNING][4378] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.166 [INFO][4378] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.167 [INFO][4378] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.172844 containerd[1717]: 2026-03-11 01:24:30.171 [INFO][4291] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:30.173280 containerd[1717]: time="2026-03-11T01:24:30.172968612Z" level=info msg="TearDown network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\" successfully" Mar 11 01:24:30.173280 containerd[1717]: time="2026-03-11T01:24:30.172988612Z" level=info msg="StopPodSandbox for \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\" returns successfully" Mar 11 01:24:30.173840 kubelet[3129]: I0311 01:24:30.173615 3129 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-backend-key-pair\") pod \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " Mar 11 01:24:30.173840 kubelet[3129]: I0311 01:24:30.173754 3129 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-ca-bundle\") pod \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " Mar 11 01:24:30.173840 kubelet[3129]: I0311 01:24:30.173786 3129 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-nginx-config\") pod \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " Mar 11 01:24:30.174000 kubelet[3129]: I0311 01:24:30.173980 3129 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27fsf\" (UniqueName: \"kubernetes.io/projected/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-kube-api-access-27fsf\") pod \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\" (UID: \"9c69ff73-f4c7-42e1-b1b3-d584530adfe4\") " Mar 11 01:24:30.182404 kubelet[3129]: I0311 01:24:30.181359 3129 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9c69ff73-f4c7-42e1-b1b3-d584530adfe4" (UID: "9c69ff73-f4c7-42e1-b1b3-d584530adfe4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 11 01:24:30.182404 kubelet[3129]: I0311 01:24:30.181431 3129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "9c69ff73-f4c7-42e1-b1b3-d584530adfe4" (UID: "9c69ff73-f4c7-42e1-b1b3-d584530adfe4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 01:24:30.182404 kubelet[3129]: I0311 01:24:30.181790 3129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9c69ff73-f4c7-42e1-b1b3-d584530adfe4" (UID: "9c69ff73-f4c7-42e1-b1b3-d584530adfe4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 11 01:24:30.182657 containerd[1717]: time="2026-03-11T01:24:30.182623109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dzt4,Uid:48c32900-5045-4f60-bbe5-13570adfb73f,Namespace:kube-system,Attempt:1,}" Mar 11 01:24:30.182860 kubelet[3129]: I0311 01:24:30.182829 3129 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-kube-api-access-27fsf" (OuterVolumeSpecName: "kube-api-access-27fsf") pod "9c69ff73-f4c7-42e1-b1b3-d584530adfe4" (UID: "9c69ff73-f4c7-42e1-b1b3-d584530adfe4"). InnerVolumeSpecName "kube-api-access-27fsf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.008 [INFO][4329] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.009 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" iface="eth0" netns="/var/run/netns/cni-c9abb529-e952-c60f-bd41-1c1e7a191cf1" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.010 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" iface="eth0" netns="/var/run/netns/cni-c9abb529-e952-c60f-bd41-1c1e7a191cf1" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.011 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" iface="eth0" netns="/var/run/netns/cni-c9abb529-e952-c60f-bd41-1c1e7a191cf1" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.011 [INFO][4329] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.011 [INFO][4329] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.126 [INFO][4397] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.126 [INFO][4397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.168 [INFO][4397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.185 [WARNING][4397] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.185 [INFO][4397] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.186 [INFO][4397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.196562 containerd[1717]: 2026-03-11 01:24:30.189 [INFO][4329] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:30.196945 containerd[1717]: time="2026-03-11T01:24:30.196917453Z" level=info msg="TearDown network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\" successfully" Mar 11 01:24:30.197013 containerd[1717]: time="2026-03-11T01:24:30.196939413Z" level=info msg="StopPodSandbox for \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\" returns successfully" Mar 11 01:24:30.202051 containerd[1717]: time="2026-03-11T01:24:30.202021942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tglk5,Uid:e98682e7-a223-4af4-85e4-1cd58ea80d8c,Namespace:calico-system,Attempt:1,}" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.004 [INFO][4321] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.004 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" iface="eth0" netns="/var/run/netns/cni-b9e8a477-0b66-3101-24ea-c5942c6c6f7b" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.004 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" iface="eth0" netns="/var/run/netns/cni-b9e8a477-0b66-3101-24ea-c5942c6c6f7b" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.005 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" iface="eth0" netns="/var/run/netns/cni-b9e8a477-0b66-3101-24ea-c5942c6c6f7b" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.005 [INFO][4321] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.005 [INFO][4321] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.133 [INFO][4384] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.133 [INFO][4384] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.186 [INFO][4384] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.198 [WARNING][4384] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.198 [INFO][4384] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.200 [INFO][4384] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.207070 containerd[1717]: 2026-03-11 01:24:30.204 [INFO][4321] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:30.207675 containerd[1717]: time="2026-03-11T01:24:30.207543831Z" level=info msg="TearDown network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\" successfully" Mar 11 01:24:30.207675 containerd[1717]: time="2026-03-11T01:24:30.207569151Z" level=info msg="StopPodSandbox for \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\" returns successfully" Mar 11 01:24:30.215689 containerd[1717]: time="2026-03-11T01:24:30.215552924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-vdzb5,Uid:b4eb406e-fa82-44d9-b52c-29c1dd09bc5c,Namespace:calico-system,Attempt:1,}" Mar 11 01:24:30.275265 kubelet[3129]: I0311 01:24:30.274799 3129 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27fsf\" (UniqueName: \"kubernetes.io/projected/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-kube-api-access-27fsf\") on node \"ci-4081.3.6-n-49f1e4db19\" DevicePath \"\"" Mar 11 01:24:30.275265 kubelet[3129]: I0311 01:24:30.274828 3129 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-49f1e4db19\" DevicePath \"\"" Mar 11 01:24:30.275265 kubelet[3129]: I0311 01:24:30.274838 3129 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-whisker-ca-bundle\") on node \"ci-4081.3.6-n-49f1e4db19\" DevicePath \"\"" Mar 11 01:24:30.275265 kubelet[3129]: I0311 01:24:30.274848 3129 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9c69ff73-f4c7-42e1-b1b3-d584530adfe4-nginx-config\") on node \"ci-4081.3.6-n-49f1e4db19\" DevicePath \"\"" Mar 11 01:24:30.328590 systemd-networkd[1612]: calib5f6e015954: Link UP Mar 11 
01:24:30.329947 systemd-networkd[1612]: calib5f6e015954: Gained carrier Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.174 [ERROR][4417] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.198 [INFO][4417] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0 calico-kube-controllers-946cb894d- calico-system 0b7b3ee1-30bd-400b-8e2b-54c143148a44 919 0 2026-03-11 01:24:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:946cb894d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 calico-kube-controllers-946cb894d-x99qm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib5f6e015954 [] [] }} ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.199 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.235 [INFO][4436] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" HandleID="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.247 [INFO][4436] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" HandleID="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"calico-kube-controllers-946cb894d-x99qm", "timestamp":"2026-03-11 01:24:30.235260078 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000305340)} Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.248 [INFO][4436] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.249 [INFO][4436] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.249 [INFO][4436] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.255 [INFO][4436] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.262 [INFO][4436] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.268 [INFO][4436] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.272 [INFO][4436] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.274 [INFO][4436] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.274 [INFO][4436] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.276 [INFO][4436] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98 Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.289 [INFO][4436] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.297 [INFO][4436] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.129/26] block=192.168.4.128/26 handle="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380130 containerd[1717]: 2026-03-11 01:24:30.297 [INFO][4436] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.129/26] handle="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.380717 containerd[1717]: 2026-03-11 01:24:30.298 [INFO][4436] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.380717 containerd[1717]: 2026-03-11 01:24:30.298 [INFO][4436] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.129/26] IPv6=[] ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" HandleID="k8s-pod-network.ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.380717 containerd[1717]: 2026-03-11 01:24:30.305 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0", GenerateName:"calico-kube-controllers-946cb894d-", Namespace:"calico-system", SelfLink:"", UID:"0b7b3ee1-30bd-400b-8e2b-54c143148a44", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"946cb894d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"calico-kube-controllers-946cb894d-x99qm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5f6e015954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.380717 containerd[1717]: 2026-03-11 01:24:30.307 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.129/32] ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.380717 containerd[1717]: 2026-03-11 01:24:30.307 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5f6e015954 ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.380717 containerd[1717]: 2026-03-11 01:24:30.331 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.380868 containerd[1717]: 2026-03-11 01:24:30.332 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0", GenerateName:"calico-kube-controllers-946cb894d-", Namespace:"calico-system", SelfLink:"", UID:"0b7b3ee1-30bd-400b-8e2b-54c143148a44", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"946cb894d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98", Pod:"calico-kube-controllers-946cb894d-x99qm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.129/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5f6e015954", MAC:"a2:29:18:99:32:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.380868 containerd[1717]: 2026-03-11 01:24:30.360 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98" Namespace="calico-system" Pod="calico-kube-controllers-946cb894d-x99qm" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:30.432117 containerd[1717]: time="2026-03-11T01:24:30.431908972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:30.432117 containerd[1717]: time="2026-03-11T01:24:30.431975412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:30.432733 containerd[1717]: time="2026-03-11T01:24:30.431993092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.432733 containerd[1717]: time="2026-03-11T01:24:30.432082892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.453295 systemd-networkd[1612]: cali44e9231e721: Link UP Mar 11 01:24:30.454974 systemd-networkd[1612]: cali44e9231e721: Gained carrier Mar 11 01:24:30.477641 systemd[1]: Started cri-containerd-ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98.scope - libcontainer container ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98. 
Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.236 [ERROR][4440] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.259 [INFO][4440] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0 coredns-66bc5c9577- kube-system 1923a631-58e0-434b-b9f7-d22db35c541a 922 0 2026-03-11 01:23:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 coredns-66bc5c9577-vq8w4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali44e9231e721 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.259 [INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.304 [INFO][4465] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" HandleID="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" 
Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.327 [INFO][4465] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" HandleID="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"coredns-66bc5c9577-vq8w4", "timestamp":"2026-03-11 01:24:30.304620956 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002531e0)} Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.327 [INFO][4465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.327 [INFO][4465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.327 [INFO][4465] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.361 [INFO][4465] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.386 [INFO][4465] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.397 [INFO][4465] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.400 [INFO][4465] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.403 [INFO][4465] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.403 [INFO][4465] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.404 [INFO][4465] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30 Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.414 [INFO][4465] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.440 [INFO][4465] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.130/26] block=192.168.4.128/26 handle="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.440 [INFO][4465] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.130/26] handle="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.505572 containerd[1717]: 2026-03-11 01:24:30.440 [INFO][4465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.506159 containerd[1717]: 2026-03-11 01:24:30.441 [INFO][4465] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.130/26] IPv6=[] ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" HandleID="k8s-pod-network.19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.506159 containerd[1717]: 2026-03-11 01:24:30.448 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1923a631-58e0-434b-b9f7-d22db35c541a", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"coredns-66bc5c9577-vq8w4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44e9231e721", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.506159 containerd[1717]: 2026-03-11 01:24:30.450 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.130/32] ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.506159 containerd[1717]: 2026-03-11 01:24:30.450 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44e9231e721 
ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.506159 containerd[1717]: 2026-03-11 01:24:30.460 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.506309 containerd[1717]: 2026-03-11 01:24:30.469 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1923a631-58e0-434b-b9f7-d22db35c541a", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30", 
Pod:"coredns-66bc5c9577-vq8w4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44e9231e721", MAC:"92:9e:17:e9:8c:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.506309 containerd[1717]: 2026-03-11 01:24:30.499 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30" Namespace="kube-system" Pod="coredns-66bc5c9577-vq8w4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:30.556294 systemd-networkd[1612]: cali499509217e6: Link UP Mar 11 01:24:30.557241 systemd-networkd[1612]: cali499509217e6: Gained carrier Mar 11 01:24:30.571392 containerd[1717]: time="2026-03-11T01:24:30.562727794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:30.571392 containerd[1717]: time="2026-03-11T01:24:30.562781154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:30.571392 containerd[1717]: time="2026-03-11T01:24:30.562793274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.571392 containerd[1717]: time="2026-03-11T01:24:30.571176249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.581796 systemd[1]: run-netns-cni\x2dc9abb529\x2de952\x2dc60f\x2dbd41\x2d1c1e7a191cf1.mount: Deactivated successfully. Mar 11 01:24:30.581884 systemd[1]: run-netns-cni\x2d936e59c9\x2d3bd0\x2da1de\x2d4bef\x2dea5940a61e4a.mount: Deactivated successfully. Mar 11 01:24:30.581966 systemd[1]: run-netns-cni\x2db9e8a477\x2d0b66\x2d3101\x2d24ea\x2dc5942c6c6f7b.mount: Deactivated successfully. Mar 11 01:24:30.582021 systemd[1]: run-netns-cni\x2d4596e889\x2d0bef\x2db266\x2d3eaa\x2d34eaa302083a.mount: Deactivated successfully. Mar 11 01:24:30.582070 systemd[1]: var-lib-kubelet-pods-9c69ff73\x2df4c7\x2d42e1\x2db1b3\x2dd584530adfe4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27fsf.mount: Deactivated successfully. Mar 11 01:24:30.582125 systemd[1]: var-lib-kubelet-pods-9c69ff73\x2df4c7\x2d42e1\x2db1b3\x2dd584530adfe4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.273 [ERROR][4453] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.290 [INFO][4453] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0 calico-apiserver-59b9679847- calico-system 4d253aa8-9d1e-4431-94a0-bc4663390ef6 921 0 2026-03-11 01:24:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59b9679847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 calico-apiserver-59b9679847-btrhp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali499509217e6 [] [] }} ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.290 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.387 [INFO][4481] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" HandleID="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" 
Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.411 [INFO][4481] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" HandleID="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038f0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"calico-apiserver-59b9679847-btrhp", "timestamp":"2026-03-11 01:24:30.387554737 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003991e0)} Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.411 [INFO][4481] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.440 [INFO][4481] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.441 [INFO][4481] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.461 [INFO][4481] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.484 [INFO][4481] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.502 [INFO][4481] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.506 [INFO][4481] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.509 [INFO][4481] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.510 [INFO][4481] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.513 [INFO][4481] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.527 [INFO][4481] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.537 [INFO][4481] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.131/26] block=192.168.4.128/26 handle="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.537 [INFO][4481] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.131/26] handle="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.598486 containerd[1717]: 2026-03-11 01:24:30.538 [INFO][4481] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.599042 containerd[1717]: 2026-03-11 01:24:30.538 [INFO][4481] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.131/26] IPv6=[] ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" HandleID="k8s-pod-network.7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.599042 containerd[1717]: 2026-03-11 01:24:30.550 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"4d253aa8-9d1e-4431-94a0-bc4663390ef6", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"calico-apiserver-59b9679847-btrhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali499509217e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.599042 containerd[1717]: 2026-03-11 01:24:30.550 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.131/32] ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.599042 containerd[1717]: 2026-03-11 01:24:30.551 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali499509217e6 ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.599042 containerd[1717]: 2026-03-11 01:24:30.558 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" 
WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.599171 containerd[1717]: 2026-03-11 01:24:30.559 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"4d253aa8-9d1e-4431-94a0-bc4663390ef6", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f", Pod:"calico-apiserver-59b9679847-btrhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali499509217e6", MAC:"9a:a6:68:70:40:df", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.599171 containerd[1717]: 2026-03-11 01:24:30.592 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f" Namespace="calico-system" Pod="calico-apiserver-59b9679847-btrhp" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:30.629685 systemd[1]: Started cri-containerd-19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30.scope - libcontainer container 19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30. Mar 11 01:24:30.656268 systemd-networkd[1612]: caliceb51fc42c7: Link UP Mar 11 01:24:30.658604 systemd-networkd[1612]: caliceb51fc42c7: Gained carrier Mar 11 01:24:30.662267 containerd[1717]: time="2026-03-11T01:24:30.659150678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:30.662267 containerd[1717]: time="2026-03-11T01:24:30.659231398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:30.662267 containerd[1717]: time="2026-03-11T01:24:30.659243318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.662267 containerd[1717]: time="2026-03-11T01:24:30.659341078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.712005 containerd[1717]: time="2026-03-11T01:24:30.711965648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vq8w4,Uid:1923a631-58e0-434b-b9f7-d22db35c541a,Namespace:kube-system,Attempt:1,} returns sandbox id \"19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30\"" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.344 [ERROR][4486] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.384 [INFO][4486] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0 goldmane-cccfbd5cf- calico-system e98682e7-a223-4af4-85e4-1cd58ea80d8c 924 0 2026-03-11 01:24:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 goldmane-cccfbd5cf-tglk5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliceb51fc42c7 [] [] }} ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.385 [INFO][4486] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 
01:24:30.492 [INFO][4520] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" HandleID="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.518 [INFO][4520] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" HandleID="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000399410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"goldmane-cccfbd5cf-tglk5", "timestamp":"2026-03-11 01:24:30.492383995 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400039e420)} Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.518 [INFO][4520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.538 [INFO][4520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.538 [INFO][4520] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.560 [INFO][4520] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.585 [INFO][4520] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.604 [INFO][4520] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.608 [INFO][4520] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.613 [INFO][4520] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.613 [INFO][4520] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.619 [INFO][4520] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21 Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.628 [INFO][4520] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.641 [INFO][4520] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.132/26] block=192.168.4.128/26 handle="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.641 [INFO][4520] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.132/26] handle="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.727206 containerd[1717]: 2026-03-11 01:24:30.641 [INFO][4520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.728734 containerd[1717]: 2026-03-11 01:24:30.641 [INFO][4520] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.132/26] IPv6=[] ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" HandleID="k8s-pod-network.89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.728734 containerd[1717]: 2026-03-11 01:24:30.649 [INFO][4486] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e98682e7-a223-4af4-85e4-1cd58ea80d8c", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"goldmane-cccfbd5cf-tglk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliceb51fc42c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.728734 containerd[1717]: 2026-03-11 01:24:30.649 [INFO][4486] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.132/32] ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.728734 containerd[1717]: 2026-03-11 01:24:30.649 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceb51fc42c7 ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.728734 containerd[1717]: 2026-03-11 01:24:30.662 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.728734 containerd[1717]: 2026-03-11 01:24:30.668 [INFO][4486] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e98682e7-a223-4af4-85e4-1cd58ea80d8c", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21", Pod:"goldmane-cccfbd5cf-tglk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliceb51fc42c7", MAC:"1a:f3:fd:a0:f6:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.728915 containerd[1717]: 2026-03-11 01:24:30.722 [INFO][4486] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tglk5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:30.739501 containerd[1717]: time="2026-03-11T01:24:30.738668213Z" level=info msg="CreateContainer within sandbox \"19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 01:24:30.764979 containerd[1717]: time="2026-03-11T01:24:30.764560697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:30.764979 containerd[1717]: time="2026-03-11T01:24:30.764614737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:30.764979 containerd[1717]: time="2026-03-11T01:24:30.764629177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.764979 containerd[1717]: time="2026-03-11T01:24:30.764698497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.778857 systemd-networkd[1612]: cali5629dc06b51: Link UP Mar 11 01:24:30.781815 systemd-networkd[1612]: cali5629dc06b51: Gained carrier Mar 11 01:24:30.810065 systemd[1]: Started cri-containerd-7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f.scope - libcontainer container 7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f. Mar 11 01:24:30.817855 systemd[1]: Started cri-containerd-89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21.scope - libcontainer container 89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21. 
Mar 11 01:24:30.837771 containerd[1717]: time="2026-03-11T01:24:30.837674381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-946cb894d-x99qm,Uid:0b7b3ee1-30bd-400b-8e2b-54c143148a44,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98\"" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.402 [ERROR][4469] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.447 [INFO][4469] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0 coredns-66bc5c9577- kube-system 48c32900-5045-4f60-bbe5-13570adfb73f 920 0 2026-03-11 01:23:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 coredns-66bc5c9577-2dzt4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5629dc06b51 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.447 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.839834 
containerd[1717]: 2026-03-11 01:24:30.520 [INFO][4573] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" HandleID="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.540 [INFO][4573] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" HandleID="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3d50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"coredns-66bc5c9577-2dzt4", "timestamp":"2026-03-11 01:24:30.520730003 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000245ce0)} Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.540 [INFO][4573] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.643 [INFO][4573] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.644 [INFO][4573] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.667 [INFO][4573] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.705 [INFO][4573] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.729 [INFO][4573] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.733 [INFO][4573] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.739 [INFO][4573] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.739 [INFO][4573] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.742 [INFO][4573] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635 Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.749 [INFO][4573] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.767 [INFO][4573] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.133/26] block=192.168.4.128/26 handle="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.767 [INFO][4573] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.133/26] handle="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.839834 containerd[1717]: 2026-03-11 01:24:30.768 [INFO][4573] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.840316 containerd[1717]: 2026-03-11 01:24:30.768 [INFO][4573] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.133/26] IPv6=[] ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" HandleID="k8s-pod-network.b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.840316 containerd[1717]: 2026-03-11 01:24:30.773 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"48c32900-5045-4f60-bbe5-13570adfb73f", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"coredns-66bc5c9577-2dzt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5629dc06b51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.840316 containerd[1717]: 2026-03-11 01:24:30.774 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.133/32] ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.840316 containerd[1717]: 2026-03-11 01:24:30.774 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5629dc06b51 
ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.840316 containerd[1717]: 2026-03-11 01:24:30.780 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.840531 containerd[1717]: 2026-03-11 01:24:30.781 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"48c32900-5045-4f60-bbe5-13570adfb73f", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635", 
Pod:"coredns-66bc5c9577-2dzt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5629dc06b51", MAC:"b2:37:7d:07:b5:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.840531 containerd[1717]: 2026-03-11 01:24:30.825 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635" Namespace="kube-system" Pod="coredns-66bc5c9577-2dzt4" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:30.851857 containerd[1717]: time="2026-03-11T01:24:30.850033642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 11 01:24:30.856033 systemd[1]: Removed slice kubepods-besteffort-pod9c69ff73_f4c7_42e1_b1b3_d584530adfe4.slice - libcontainer container kubepods-besteffort-pod9c69ff73_f4c7_42e1_b1b3_d584530adfe4.slice. 
Mar 11 01:24:30.896783 containerd[1717]: time="2026-03-11T01:24:30.896579081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:30.896783 containerd[1717]: time="2026-03-11T01:24:30.896635681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:30.896783 containerd[1717]: time="2026-03-11T01:24:30.896660002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.896783 containerd[1717]: time="2026-03-11T01:24:30.896745802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:30.916261 systemd-networkd[1612]: calib76d2880919: Link UP Mar 11 01:24:30.918806 systemd-networkd[1612]: calib76d2880919: Gained carrier Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.424 [ERROR][4498] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.446 [INFO][4498] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0 calico-apiserver-59b9679847- calico-system b4eb406e-fa82-44d9-b52c-29c1dd09bc5c 923 0 2026-03-11 01:24:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59b9679847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 calico-apiserver-59b9679847-vdzb5 eth0 calico-apiserver [] [] [kns.calico-system 
ksa.calico-system.calico-apiserver] calib76d2880919 [] [] }} ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.446 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.543 [INFO][4554] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" HandleID="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.563 [INFO][4554] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" HandleID="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000336910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"calico-apiserver-59b9679847-vdzb5", "timestamp":"2026-03-11 01:24:30.543950882 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002b0dc0)} Mar 
11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.563 [INFO][4554] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.767 [INFO][4554] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.767 [INFO][4554] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.775 [INFO][4554] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.788 [INFO][4554] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.837 [INFO][4554] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.843 [INFO][4554] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.854 [INFO][4554] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.854 [INFO][4554] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.859 [INFO][4554] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6 Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.866 [INFO][4554] ipam/ipam.go 1272: Writing block in 
order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.901 [INFO][4554] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.4.134/26] block=192.168.4.128/26 handle="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.901 [INFO][4554] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.134/26] handle="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:30.957152 containerd[1717]: 2026-03-11 01:24:30.901 [INFO][4554] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:30.957696 containerd[1717]: 2026-03-11 01:24:30.902 [INFO][4554] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.134/26] IPv6=[] ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" HandleID="k8s-pod-network.9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.957696 containerd[1717]: 2026-03-11 01:24:30.906 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"b4eb406e-fa82-44d9-b52c-29c1dd09bc5c", 
ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"calico-apiserver-59b9679847-vdzb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib76d2880919", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.957696 containerd[1717]: 2026-03-11 01:24:30.907 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.134/32] ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.957696 containerd[1717]: 2026-03-11 01:24:30.907 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib76d2880919 ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.957696 containerd[1717]: 2026-03-11 
01:24:30.918 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.957876 containerd[1717]: 2026-03-11 01:24:30.921 [INFO][4498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"b4eb406e-fa82-44d9-b52c-29c1dd09bc5c", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6", Pod:"calico-apiserver-59b9679847-vdzb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.4.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib76d2880919", MAC:"0a:c4:76:f4:67:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:30.957876 containerd[1717]: 2026-03-11 01:24:30.953 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6" Namespace="calico-system" Pod="calico-apiserver-59b9679847-vdzb5" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:30.970755 systemd[1]: Started cri-containerd-b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635.scope - libcontainer container b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635. Mar 11 01:24:30.989148 containerd[1717]: time="2026-03-11T01:24:30.988983598Z" level=info msg="CreateContainer within sandbox \"19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc5afa1ac8e801c3bab4bbd5cd195cbe570f7f397690fa56ec1b40659919b28c\"" Mar 11 01:24:30.992261 containerd[1717]: time="2026-03-11T01:24:30.992225124Z" level=info msg="StartContainer for \"bc5afa1ac8e801c3bab4bbd5cd195cbe570f7f397690fa56ec1b40659919b28c\"" Mar 11 01:24:30.998445 systemd[1]: Created slice kubepods-besteffort-poddbaa2771_6401_4948_809a_f7e419f34efd.slice - libcontainer container kubepods-besteffort-poddbaa2771_6401_4948_809a_f7e419f34efd.slice. Mar 11 01:24:31.022696 containerd[1717]: time="2026-03-11T01:24:31.022217455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:31.022696 containerd[1717]: time="2026-03-11T01:24:31.022265695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:31.022696 containerd[1717]: time="2026-03-11T01:24:31.022280535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:31.025466 containerd[1717]: time="2026-03-11T01:24:31.023849698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:31.079658 systemd[1]: Started cri-containerd-bc5afa1ac8e801c3bab4bbd5cd195cbe570f7f397690fa56ec1b40659919b28c.scope - libcontainer container bc5afa1ac8e801c3bab4bbd5cd195cbe570f7f397690fa56ec1b40659919b28c. Mar 11 01:24:31.091535 systemd[1]: Started cri-containerd-9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6.scope - libcontainer container 9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6. 
Mar 11 01:24:31.095802 kubelet[3129]: I0311 01:24:31.095096 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbaa2771-6401-4948-809a-f7e419f34efd-whisker-ca-bundle\") pod \"whisker-55745ff44c-mkbtt\" (UID: \"dbaa2771-6401-4948-809a-f7e419f34efd\") " pod="calico-system/whisker-55745ff44c-mkbtt" Mar 11 01:24:31.095802 kubelet[3129]: I0311 01:24:31.095155 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgq5\" (UniqueName: \"kubernetes.io/projected/dbaa2771-6401-4948-809a-f7e419f34efd-kube-api-access-frgq5\") pod \"whisker-55745ff44c-mkbtt\" (UID: \"dbaa2771-6401-4948-809a-f7e419f34efd\") " pod="calico-system/whisker-55745ff44c-mkbtt" Mar 11 01:24:31.095802 kubelet[3129]: I0311 01:24:31.095177 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dbaa2771-6401-4948-809a-f7e419f34efd-nginx-config\") pod \"whisker-55745ff44c-mkbtt\" (UID: \"dbaa2771-6401-4948-809a-f7e419f34efd\") " pod="calico-system/whisker-55745ff44c-mkbtt" Mar 11 01:24:31.095802 kubelet[3129]: I0311 01:24:31.095196 3129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dbaa2771-6401-4948-809a-f7e419f34efd-whisker-backend-key-pair\") pod \"whisker-55745ff44c-mkbtt\" (UID: \"dbaa2771-6401-4948-809a-f7e419f34efd\") " pod="calico-system/whisker-55745ff44c-mkbtt" Mar 11 01:24:31.121496 containerd[1717]: time="2026-03-11T01:24:31.121387343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-btrhp,Uid:4d253aa8-9d1e-4431-94a0-bc4663390ef6,Namespace:calico-system,Attempt:1,} returns sandbox id \"7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f\"" Mar 11 01:24:31.128584 
containerd[1717]: time="2026-03-11T01:24:31.127791274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tglk5,Uid:e98682e7-a223-4af4-85e4-1cd58ea80d8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21\"" Mar 11 01:24:31.136671 containerd[1717]: time="2026-03-11T01:24:31.135770328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dzt4,Uid:48c32900-5045-4f60-bbe5-13570adfb73f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635\"" Mar 11 01:24:31.184355 containerd[1717]: time="2026-03-11T01:24:31.184316250Z" level=info msg="StartContainer for \"bc5afa1ac8e801c3bab4bbd5cd195cbe570f7f397690fa56ec1b40659919b28c\" returns successfully" Mar 11 01:24:31.186466 containerd[1717]: time="2026-03-11T01:24:31.186421694Z" level=info msg="CreateContainer within sandbox \"b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 11 01:24:31.236091 containerd[1717]: time="2026-03-11T01:24:31.235855178Z" level=info msg="CreateContainer within sandbox \"b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bfdba23862f6584556cff751a30430f119d33e5d2f29b2b1bcf5fad32831aa7\"" Mar 11 01:24:31.239379 containerd[1717]: time="2026-03-11T01:24:31.239202223Z" level=info msg="StartContainer for \"1bfdba23862f6584556cff751a30430f119d33e5d2f29b2b1bcf5fad32831aa7\"" Mar 11 01:24:31.263242 containerd[1717]: time="2026-03-11T01:24:31.263101024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59b9679847-vdzb5,Uid:b4eb406e-fa82-44d9-b52c-29c1dd09bc5c,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6\"" Mar 11 01:24:31.295664 systemd[1]: Started 
cri-containerd-1bfdba23862f6584556cff751a30430f119d33e5d2f29b2b1bcf5fad32831aa7.scope - libcontainer container 1bfdba23862f6584556cff751a30430f119d33e5d2f29b2b1bcf5fad32831aa7. Mar 11 01:24:31.307215 containerd[1717]: time="2026-03-11T01:24:31.307181259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55745ff44c-mkbtt,Uid:dbaa2771-6401-4948-809a-f7e419f34efd,Namespace:calico-system,Attempt:0,}" Mar 11 01:24:31.352493 containerd[1717]: time="2026-03-11T01:24:31.352370536Z" level=info msg="StartContainer for \"1bfdba23862f6584556cff751a30430f119d33e5d2f29b2b1bcf5fad32831aa7\" returns successfully" Mar 11 01:24:31.528831 systemd-networkd[1612]: cali733728f2889: Link UP Mar 11 01:24:31.532799 systemd-networkd[1612]: cali733728f2889: Gained carrier Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.395 [ERROR][5013] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.425 [INFO][5013] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0 whisker-55745ff44c- calico-system dbaa2771-6401-4948-809a-f7e419f34efd 961 0 2026-03-11 01:24:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55745ff44c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 whisker-55745ff44c-mkbtt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali733728f2889 [] [] }} ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-" Mar 11 01:24:31.561562 
containerd[1717]: 2026-03-11 01:24:31.425 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.467 [INFO][5028] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" HandleID="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.477 [INFO][5028] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" HandleID="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f94e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"whisker-55745ff44c-mkbtt", "timestamp":"2026-03-11 01:24:31.467262651 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000252f20)} Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.478 [INFO][5028] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.478 [INFO][5028] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.478 [INFO][5028] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.487 [INFO][5028] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.491 [INFO][5028] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.496 [INFO][5028] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.498 [INFO][5028] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.500 [INFO][5028] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.500 [INFO][5028] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.502 [INFO][5028] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.508 [INFO][5028] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.517 [INFO][5028] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.135/26] block=192.168.4.128/26 handle="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.517 [INFO][5028] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.135/26] handle="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:31.561562 containerd[1717]: 2026-03-11 01:24:31.517 [INFO][5028] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:31.562082 containerd[1717]: 2026-03-11 01:24:31.517 [INFO][5028] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.135/26] IPv6=[] ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" HandleID="k8s-pod-network.67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.562082 containerd[1717]: 2026-03-11 01:24:31.520 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0", GenerateName:"whisker-55745ff44c-", Namespace:"calico-system", SelfLink:"", UID:"dbaa2771-6401-4948-809a-f7e419f34efd", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55745ff44c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"whisker-55745ff44c-mkbtt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali733728f2889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:31.562082 containerd[1717]: 2026-03-11 01:24:31.522 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.135/32] ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.562082 containerd[1717]: 2026-03-11 01:24:31.522 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali733728f2889 ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.562082 containerd[1717]: 2026-03-11 01:24:31.533 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.562082 containerd[1717]: 2026-03-11 01:24:31.535 [INFO][5013] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0", GenerateName:"whisker-55745ff44c-", Namespace:"calico-system", SelfLink:"", UID:"dbaa2771-6401-4948-809a-f7e419f34efd", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55745ff44c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec", Pod:"whisker-55745ff44c-mkbtt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.4.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali733728f2889", MAC:"3e:0f:b8:03:29:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:31.562259 containerd[1717]: 2026-03-11 01:24:31.555 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec" 
Namespace="calico-system" Pod="whisker-55745ff44c-mkbtt" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--55745ff44c--mkbtt-eth0" Mar 11 01:24:31.581179 systemd[1]: run-containerd-runc-k8s.io-7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f-runc.MRfZRX.mount: Deactivated successfully. Mar 11 01:24:31.594635 containerd[1717]: time="2026-03-11T01:24:31.594281267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:31.594635 containerd[1717]: time="2026-03-11T01:24:31.594333507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:31.594635 containerd[1717]: time="2026-03-11T01:24:31.594348627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:31.594635 containerd[1717]: time="2026-03-11T01:24:31.594420227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:31.641606 systemd[1]: Started cri-containerd-67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec.scope - libcontainer container 67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec. 
Mar 11 01:24:31.655612 kubelet[3129]: I0311 01:24:31.654935 3129 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c69ff73-f4c7-42e1-b1b3-d584530adfe4" path="/var/lib/kubelet/pods/9c69ff73-f4c7-42e1-b1b3-d584530adfe4/volumes" Mar 11 01:24:31.682464 containerd[1717]: time="2026-03-11T01:24:31.680547333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55745ff44c-mkbtt,Uid:dbaa2771-6401-4948-809a-f7e419f34efd,Namespace:calico-system,Attempt:0,} returns sandbox id \"67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec\"" Mar 11 01:24:31.704515 kernel: calico-node[4620]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 11 01:24:31.722517 systemd-networkd[1612]: calib5f6e015954: Gained IPv6LL Mar 11 01:24:31.784582 systemd-networkd[1612]: cali499509217e6: Gained IPv6LL Mar 11 01:24:31.906128 kubelet[3129]: I0311 01:24:31.905603 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vq8w4" podStartSLOduration=37.905586715 podStartE2EDuration="37.905586715s" podCreationTimestamp="2026-03-11 01:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:24:31.882645276 +0000 UTC m=+44.357370296" watchObservedRunningTime="2026-03-11 01:24:31.905586715 +0000 UTC m=+44.380311695" Mar 11 01:24:31.928296 kubelet[3129]: I0311 01:24:31.928246 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2dzt4" podStartSLOduration=37.928227394 podStartE2EDuration="37.928227394s" podCreationTimestamp="2026-03-11 01:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 01:24:31.906584197 +0000 UTC m=+44.381309217" watchObservedRunningTime="2026-03-11 01:24:31.928227394 +0000 UTC m=+44.402952374" Mar 11 01:24:31.976616 
systemd-networkd[1612]: cali44e9231e721: Gained IPv6LL Mar 11 01:24:32.136105 systemd-networkd[1612]: vxlan.calico: Link UP Mar 11 01:24:32.136117 systemd-networkd[1612]: vxlan.calico: Gained carrier Mar 11 01:24:32.424599 systemd-networkd[1612]: calib76d2880919: Gained IPv6LL Mar 11 01:24:32.616670 systemd-networkd[1612]: caliceb51fc42c7: Gained IPv6LL Mar 11 01:24:32.681633 systemd-networkd[1612]: cali5629dc06b51: Gained IPv6LL Mar 11 01:24:33.384624 systemd-networkd[1612]: cali733728f2889: Gained IPv6LL Mar 11 01:24:33.384888 systemd-networkd[1612]: vxlan.calico: Gained IPv6LL Mar 11 01:24:38.854018 containerd[1717]: time="2026-03-11T01:24:38.853958508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 11 01:24:38.859558 containerd[1717]: time="2026-03-11T01:24:38.859132117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:38.860378 containerd[1717]: time="2026-03-11T01:24:38.860354000Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:38.861479 containerd[1717]: time="2026-03-11T01:24:38.861222561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:38.862227 containerd[1717]: time="2026-03-11T01:24:38.862184923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 8.009764077s" Mar 11 01:24:38.862227 containerd[1717]: time="2026-03-11T01:24:38.862213963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 11 01:24:38.865335 containerd[1717]: time="2026-03-11T01:24:38.865144488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 01:24:38.887575 containerd[1717]: time="2026-03-11T01:24:38.887527967Z" level=info msg="CreateContainer within sandbox \"ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 11 01:24:38.920482 containerd[1717]: time="2026-03-11T01:24:38.920432665Z" level=info msg="CreateContainer within sandbox \"ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4baca01271e8614021f432d9d73655a1ebdc2a9267e6352968d906adad77cf49\"" Mar 11 01:24:38.923126 containerd[1717]: time="2026-03-11T01:24:38.922209428Z" level=info msg="StartContainer for \"4baca01271e8614021f432d9d73655a1ebdc2a9267e6352968d906adad77cf49\"" Mar 11 01:24:38.948602 systemd[1]: Started cri-containerd-4baca01271e8614021f432d9d73655a1ebdc2a9267e6352968d906adad77cf49.scope - libcontainer container 4baca01271e8614021f432d9d73655a1ebdc2a9267e6352968d906adad77cf49. 
Mar 11 01:24:38.980745 containerd[1717]: time="2026-03-11T01:24:38.980646970Z" level=info msg="StartContainer for \"4baca01271e8614021f432d9d73655a1ebdc2a9267e6352968d906adad77cf49\" returns successfully" Mar 11 01:24:39.925200 kubelet[3129]: I0311 01:24:39.924058 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-946cb894d-x99qm" podStartSLOduration=19.90791753 podStartE2EDuration="27.924045258s" podCreationTimestamp="2026-03-11 01:24:12 +0000 UTC" firstStartedPulling="2026-03-11 01:24:30.847625158 +0000 UTC m=+43.322350178" lastFinishedPulling="2026-03-11 01:24:38.863752886 +0000 UTC m=+51.338477906" observedRunningTime="2026-03-11 01:24:39.923347697 +0000 UTC m=+52.398072757" watchObservedRunningTime="2026-03-11 01:24:39.924045258 +0000 UTC m=+52.398770278" Mar 11 01:24:41.003072 containerd[1717]: time="2026-03-11T01:24:41.003022664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:41.005775 containerd[1717]: time="2026-03-11T01:24:41.005592348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 11 01:24:41.008474 containerd[1717]: time="2026-03-11T01:24:41.008250913Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:41.016907 containerd[1717]: time="2026-03-11T01:24:41.016788448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.15161388s" Mar 11 01:24:41.016907 
containerd[1717]: time="2026-03-11T01:24:41.016824128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 11 01:24:41.017168 containerd[1717]: time="2026-03-11T01:24:41.017138408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:41.020066 containerd[1717]: time="2026-03-11T01:24:41.019897933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 11 01:24:41.024581 containerd[1717]: time="2026-03-11T01:24:41.024557701Z" level=info msg="CreateContainer within sandbox \"7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 01:24:41.079786 containerd[1717]: time="2026-03-11T01:24:41.079745598Z" level=info msg="CreateContainer within sandbox \"7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"40567ae93470123ac6b5479d8e9824c7491893ba55b4c48b5619c410b8ccd37a\"" Mar 11 01:24:41.081524 containerd[1717]: time="2026-03-11T01:24:41.080513399Z" level=info msg="StartContainer for \"40567ae93470123ac6b5479d8e9824c7491893ba55b4c48b5619c410b8ccd37a\"" Mar 11 01:24:41.115593 systemd[1]: Started cri-containerd-40567ae93470123ac6b5479d8e9824c7491893ba55b4c48b5619c410b8ccd37a.scope - libcontainer container 40567ae93470123ac6b5479d8e9824c7491893ba55b4c48b5619c410b8ccd37a. 
Mar 11 01:24:41.146019 containerd[1717]: time="2026-03-11T01:24:41.145890593Z" level=info msg="StartContainer for \"40567ae93470123ac6b5479d8e9824c7491893ba55b4c48b5619c410b8ccd37a\" returns successfully" Mar 11 01:24:42.648471 containerd[1717]: time="2026-03-11T01:24:42.648231698Z" level=info msg="StopPodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\"" Mar 11 01:24:42.730403 kubelet[3129]: I0311 01:24:42.730172 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59b9679847-btrhp" podStartSLOduration=24.84266755 podStartE2EDuration="34.730154721s" podCreationTimestamp="2026-03-11 01:24:08 +0000 UTC" firstStartedPulling="2026-03-11 01:24:31.130692959 +0000 UTC m=+43.605417979" lastFinishedPulling="2026-03-11 01:24:41.01818013 +0000 UTC m=+53.492905150" observedRunningTime="2026-03-11 01:24:41.917612382 +0000 UTC m=+54.392337402" watchObservedRunningTime="2026-03-11 01:24:42.730154721 +0000 UTC m=+55.204879741" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.732 [INFO][5390] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.732 [INFO][5390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" iface="eth0" netns="/var/run/netns/cni-ea9eba6c-0648-3f90-d79a-23097dfb7b24" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.732 [INFO][5390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" iface="eth0" netns="/var/run/netns/cni-ea9eba6c-0648-3f90-d79a-23097dfb7b24" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.732 [INFO][5390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" iface="eth0" netns="/var/run/netns/cni-ea9eba6c-0648-3f90-d79a-23097dfb7b24" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.733 [INFO][5390] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.733 [INFO][5390] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.759 [INFO][5397] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.759 [INFO][5397] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.759 [INFO][5397] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.770 [WARNING][5397] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.770 [INFO][5397] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.772 [INFO][5397] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:42.778011 containerd[1717]: 2026-03-11 01:24:42.775 [INFO][5390] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:42.778883 containerd[1717]: time="2026-03-11T01:24:42.778538126Z" level=info msg="TearDown network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" successfully" Mar 11 01:24:42.778883 containerd[1717]: time="2026-03-11T01:24:42.778569006Z" level=info msg="StopPodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" returns successfully" Mar 11 01:24:42.782044 systemd[1]: run-netns-cni\x2dea9eba6c\x2d0648\x2d3f90\x2dd79a\x2d23097dfb7b24.mount: Deactivated successfully. 
Mar 11 01:24:42.789178 containerd[1717]: time="2026-03-11T01:24:42.789097184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjh4h,Uid:c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b,Namespace:calico-system,Attempt:1,}" Mar 11 01:24:42.900920 kubelet[3129]: I0311 01:24:42.900734 3129 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:24:42.948119 systemd-networkd[1612]: calib6f9460fa25: Link UP Mar 11 01:24:42.949557 systemd-networkd[1612]: calib6f9460fa25: Gained carrier Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.858 [INFO][5404] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0 csi-node-driver- calico-system c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b 1029 0 2026-03-11 01:24:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-49f1e4db19 csi-node-driver-tjh4h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib6f9460fa25 [] [] }} ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.858 [INFO][5404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.888 [INFO][5417] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" HandleID="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.903 [INFO][5417] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" HandleID="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-49f1e4db19", "pod":"csi-node-driver-tjh4h", "timestamp":"2026-03-11 01:24:42.888722439 +0000 UTC"}, Hostname:"ci-4081.3.6-n-49f1e4db19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400036d340)} Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.903 [INFO][5417] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.903 [INFO][5417] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.903 [INFO][5417] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-49f1e4db19' Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.905 [INFO][5417] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.909 [INFO][5417] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.915 [INFO][5417] ipam/ipam.go 526: Trying affinity for 192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.918 [INFO][5417] ipam/ipam.go 160: Attempting to load block cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.920 [INFO][5417] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.4.128/26 host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.920 [INFO][5417] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.4.128/26 handle="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.922 [INFO][5417] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.928 [INFO][5417] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.4.128/26 handle="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.940 [INFO][5417] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.4.136/26] block=192.168.4.128/26 handle="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.940 [INFO][5417] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.4.136/26] handle="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" host="ci-4081.3.6-n-49f1e4db19" Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.940 [INFO][5417] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:42.980985 containerd[1717]: 2026-03-11 01:24:42.940 [INFO][5417] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.4.136/26] IPv6=[] ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" HandleID="k8s-pod-network.94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.981919 containerd[1717]: 2026-03-11 01:24:42.943 [INFO][5404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"", Pod:"csi-node-driver-tjh4h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6f9460fa25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:42.981919 containerd[1717]: 2026-03-11 01:24:42.943 [INFO][5404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.4.136/32] ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.981919 containerd[1717]: 2026-03-11 01:24:42.944 [INFO][5404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6f9460fa25 ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.981919 containerd[1717]: 2026-03-11 01:24:42.953 [INFO][5404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:42.981919 
containerd[1717]: 2026-03-11 01:24:42.954 [INFO][5404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b", Pod:"csi-node-driver-tjh4h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6f9460fa25", MAC:"c2:8e:51:84:e3:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:42.981919 containerd[1717]: 
2026-03-11 01:24:42.977 [INFO][5404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b" Namespace="calico-system" Pod="csi-node-driver-tjh4h" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:43.036112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106468897.mount: Deactivated successfully. Mar 11 01:24:43.056578 containerd[1717]: time="2026-03-11T01:24:43.056496812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 11 01:24:43.056578 containerd[1717]: time="2026-03-11T01:24:43.056546692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 11 01:24:43.057695 containerd[1717]: time="2026-03-11T01:24:43.056746732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:43.064662 containerd[1717]: time="2026-03-11T01:24:43.064117185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 11 01:24:43.080626 systemd[1]: Started cri-containerd-94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b.scope - libcontainer container 94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b. 
Mar 11 01:24:43.123728 containerd[1717]: time="2026-03-11T01:24:43.123549729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tjh4h,Uid:c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b,Namespace:calico-system,Attempt:1,} returns sandbox id \"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b\"" Mar 11 01:24:44.149406 containerd[1717]: time="2026-03-11T01:24:44.149351025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.151866 containerd[1717]: time="2026-03-11T01:24:44.151712149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 11 01:24:44.155479 containerd[1717]: time="2026-03-11T01:24:44.155271475Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.161399 containerd[1717]: time="2026-03-11T01:24:44.161112005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.161908 containerd[1717]: time="2026-03-11T01:24:44.161877886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.141949513s" Mar 11 01:24:44.161966 containerd[1717]: time="2026-03-11T01:24:44.161908966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 11 
01:24:44.163808 containerd[1717]: time="2026-03-11T01:24:44.163779169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 11 01:24:44.171328 containerd[1717]: time="2026-03-11T01:24:44.171296862Z" level=info msg="CreateContainer within sandbox \"89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 11 01:24:44.209874 containerd[1717]: time="2026-03-11T01:24:44.209831168Z" level=info msg="CreateContainer within sandbox \"89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23\"" Mar 11 01:24:44.211716 containerd[1717]: time="2026-03-11T01:24:44.210616729Z" level=info msg="StartContainer for \"1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23\"" Mar 11 01:24:44.242610 systemd[1]: Started cri-containerd-1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23.scope - libcontainer container 1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23. 
Mar 11 01:24:44.264902 systemd-networkd[1612]: calib6f9460fa25: Gained IPv6LL Mar 11 01:24:44.279648 containerd[1717]: time="2026-03-11T01:24:44.279536926Z" level=info msg="StartContainer for \"1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23\" returns successfully" Mar 11 01:24:44.510207 containerd[1717]: time="2026-03-11T01:24:44.509512638Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:44.511657 containerd[1717]: time="2026-03-11T01:24:44.511630681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 11 01:24:44.514117 containerd[1717]: time="2026-03-11T01:24:44.514010085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 350.199756ms" Mar 11 01:24:44.514117 containerd[1717]: time="2026-03-11T01:24:44.514042686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 11 01:24:44.515500 containerd[1717]: time="2026-03-11T01:24:44.515466568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 11 01:24:44.520876 containerd[1717]: time="2026-03-11T01:24:44.520847577Z" level=info msg="CreateContainer within sandbox \"9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 11 01:24:44.555104 containerd[1717]: time="2026-03-11T01:24:44.555059355Z" level=info msg="CreateContainer within sandbox 
\"9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed57ef50e2c4e3b8717fff789d6fa855186585a6fad766ac662c08abf43eb531\"" Mar 11 01:24:44.555799 containerd[1717]: time="2026-03-11T01:24:44.555775477Z" level=info msg="StartContainer for \"ed57ef50e2c4e3b8717fff789d6fa855186585a6fad766ac662c08abf43eb531\"" Mar 11 01:24:44.577772 systemd[1]: run-containerd-runc-k8s.io-1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23-runc.3q9YFF.mount: Deactivated successfully. Mar 11 01:24:44.586617 systemd[1]: Started cri-containerd-ed57ef50e2c4e3b8717fff789d6fa855186585a6fad766ac662c08abf43eb531.scope - libcontainer container ed57ef50e2c4e3b8717fff789d6fa855186585a6fad766ac662c08abf43eb531. Mar 11 01:24:44.618823 containerd[1717]: time="2026-03-11T01:24:44.618649184Z" level=info msg="StartContainer for \"ed57ef50e2c4e3b8717fff789d6fa855186585a6fad766ac662c08abf43eb531\" returns successfully" Mar 11 01:24:44.932858 kubelet[3129]: I0311 01:24:44.932797 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-tglk5" podStartSLOduration=23.913533532 podStartE2EDuration="36.932780558s" podCreationTimestamp="2026-03-11 01:24:08 +0000 UTC" firstStartedPulling="2026-03-11 01:24:31.143934462 +0000 UTC m=+43.618659482" lastFinishedPulling="2026-03-11 01:24:44.163181488 +0000 UTC m=+56.637906508" observedRunningTime="2026-03-11 01:24:44.925687266 +0000 UTC m=+57.400412286" watchObservedRunningTime="2026-03-11 01:24:44.932780558 +0000 UTC m=+57.407505578" Mar 11 01:24:45.769821 containerd[1717]: time="2026-03-11T01:24:45.769771743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:45.773313 containerd[1717]: time="2026-03-11T01:24:45.773166748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active 
requests=0, bytes read=5882804" Mar 11 01:24:45.776344 containerd[1717]: time="2026-03-11T01:24:45.776292474Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:45.781125 containerd[1717]: time="2026-03-11T01:24:45.780800601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:45.781610 containerd[1717]: time="2026-03-11T01:24:45.781581603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.265990795s" Mar 11 01:24:45.781672 containerd[1717]: time="2026-03-11T01:24:45.781612123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 11 01:24:45.784274 containerd[1717]: time="2026-03-11T01:24:45.783848647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 11 01:24:45.821843 containerd[1717]: time="2026-03-11T01:24:45.821793271Z" level=info msg="CreateContainer within sandbox \"67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 11 01:24:45.897792 containerd[1717]: time="2026-03-11T01:24:45.897749120Z" level=info msg="CreateContainer within sandbox \"67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"1614c4290ad9fe8d893f65bf1daccb8a845a55534eba74a9a3d3ce85c54cf8fd\"" Mar 11 01:24:45.899028 containerd[1717]: time="2026-03-11T01:24:45.898972683Z" level=info msg="StartContainer for \"1614c4290ad9fe8d893f65bf1daccb8a845a55534eba74a9a3d3ce85c54cf8fd\"" Mar 11 01:24:45.940379 systemd[1]: run-containerd-runc-k8s.io-1614c4290ad9fe8d893f65bf1daccb8a845a55534eba74a9a3d3ce85c54cf8fd-runc.8Vrc5b.mount: Deactivated successfully. Mar 11 01:24:45.951523 systemd[1]: Started cri-containerd-1614c4290ad9fe8d893f65bf1daccb8a845a55534eba74a9a3d3ce85c54cf8fd.scope - libcontainer container 1614c4290ad9fe8d893f65bf1daccb8a845a55534eba74a9a3d3ce85c54cf8fd. Mar 11 01:24:46.026257 containerd[1717]: time="2026-03-11T01:24:46.026105619Z" level=info msg="StartContainer for \"1614c4290ad9fe8d893f65bf1daccb8a845a55534eba74a9a3d3ce85c54cf8fd\" returns successfully" Mar 11 01:24:46.083754 kubelet[3129]: I0311 01:24:46.083581 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59b9679847-vdzb5" podStartSLOduration=24.83550642 podStartE2EDuration="38.083562517s" podCreationTimestamp="2026-03-11 01:24:08 +0000 UTC" firstStartedPulling="2026-03-11 01:24:31.26668247 +0000 UTC m=+43.741407490" lastFinishedPulling="2026-03-11 01:24:44.514738567 +0000 UTC m=+56.989463587" observedRunningTime="2026-03-11 01:24:44.95714732 +0000 UTC m=+57.431872340" watchObservedRunningTime="2026-03-11 01:24:46.083562517 +0000 UTC m=+58.558287537" Mar 11 01:24:47.230487 containerd[1717]: time="2026-03-11T01:24:47.230320428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:47.232364 containerd[1717]: time="2026-03-11T01:24:47.232329472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 11 01:24:47.235471 containerd[1717]: time="2026-03-11T01:24:47.235418837Z" level=info msg="ImageCreate event 
name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:47.240082 containerd[1717]: time="2026-03-11T01:24:47.239976205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:47.241398 containerd[1717]: time="2026-03-11T01:24:47.241293847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.45741292s" Mar 11 01:24:47.241398 containerd[1717]: time="2026-03-11T01:24:47.241326727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 11 01:24:47.242475 containerd[1717]: time="2026-03-11T01:24:47.242254369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 11 01:24:47.248828 containerd[1717]: time="2026-03-11T01:24:47.248801460Z" level=info msg="CreateContainer within sandbox \"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 11 01:24:47.288715 containerd[1717]: time="2026-03-11T01:24:47.288683688Z" level=info msg="CreateContainer within sandbox \"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ecf820ae663b82f41c196cdc1396f79e88a9928c142030ea842db05150d97e16\"" Mar 11 01:24:47.289348 containerd[1717]: time="2026-03-11T01:24:47.289326729Z" level=info msg="StartContainer for 
\"ecf820ae663b82f41c196cdc1396f79e88a9928c142030ea842db05150d97e16\"" Mar 11 01:24:47.325606 systemd[1]: Started cri-containerd-ecf820ae663b82f41c196cdc1396f79e88a9928c142030ea842db05150d97e16.scope - libcontainer container ecf820ae663b82f41c196cdc1396f79e88a9928c142030ea842db05150d97e16. Mar 11 01:24:47.354644 containerd[1717]: time="2026-03-11T01:24:47.354552600Z" level=info msg="StartContainer for \"ecf820ae663b82f41c196cdc1396f79e88a9928c142030ea842db05150d97e16\" returns successfully" Mar 11 01:24:47.638339 containerd[1717]: time="2026-03-11T01:24:47.638174482Z" level=info msg="StopPodSandbox for \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\"" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.672 [WARNING][5705] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"4d253aa8-9d1e-4431-94a0-bc4663390ef6", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f", Pod:"calico-apiserver-59b9679847-btrhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali499509217e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.674 [INFO][5705] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.674 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" iface="eth0" netns="" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.674 [INFO][5705] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.674 [INFO][5705] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.702 [INFO][5714] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.702 [INFO][5714] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.702 [INFO][5714] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.711 [WARNING][5714] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.711 [INFO][5714] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.712 [INFO][5714] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:47.716009 containerd[1717]: 2026-03-11 01:24:47.714 [INFO][5705] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.716542 containerd[1717]: time="2026-03-11T01:24:47.716047495Z" level=info msg="TearDown network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\" successfully" Mar 11 01:24:47.716542 containerd[1717]: time="2026-03-11T01:24:47.716070335Z" level=info msg="StopPodSandbox for \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\" returns successfully" Mar 11 01:24:47.716607 containerd[1717]: time="2026-03-11T01:24:47.716579416Z" level=info msg="RemovePodSandbox for \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\"" Mar 11 01:24:47.723888 containerd[1717]: time="2026-03-11T01:24:47.723556268Z" level=info msg="Forcibly stopping sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\"" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.754 [WARNING][5728] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"4d253aa8-9d1e-4431-94a0-bc4663390ef6", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"7fb25a756d782c7f1a81fbeb6708e265c98946e2e1aa1faafbf076f9ad8ad14f", Pod:"calico-apiserver-59b9679847-btrhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali499509217e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.755 [INFO][5728] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.755 [INFO][5728] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" iface="eth0" netns="" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.755 [INFO][5728] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.755 [INFO][5728] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.774 [INFO][5735] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.774 [INFO][5735] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.774 [INFO][5735] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.783 [WARNING][5735] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.783 [INFO][5735] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" HandleID="k8s-pod-network.fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--btrhp-eth0" Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.784 [INFO][5735] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:47.787221 containerd[1717]: 2026-03-11 01:24:47.786 [INFO][5728] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589" Mar 11 01:24:47.787634 containerd[1717]: time="2026-03-11T01:24:47.787262296Z" level=info msg="TearDown network for sandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\" successfully" Mar 11 01:24:47.798372 containerd[1717]: time="2026-03-11T01:24:47.798335275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:47.798465 containerd[1717]: time="2026-03-11T01:24:47.798419115Z" level=info msg="RemovePodSandbox \"fbc97c1df4cca0ab83a2af6a007ed467b8f069133f4497dfa9610e40ac974589\" returns successfully" Mar 11 01:24:47.799124 containerd[1717]: time="2026-03-11T01:24:47.799072756Z" level=info msg="StopPodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\"" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.828 [WARNING][5750] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b", Pod:"csi-node-driver-tjh4h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6f9460fa25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.828 [INFO][5750] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.828 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" iface="eth0" netns="" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.828 [INFO][5750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.828 [INFO][5750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.844 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.844 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.844 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.852 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.853 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.854 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:47.856734 containerd[1717]: 2026-03-11 01:24:47.855 [INFO][5750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.857323 containerd[1717]: time="2026-03-11T01:24:47.856781854Z" level=info msg="TearDown network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" successfully" Mar 11 01:24:47.857323 containerd[1717]: time="2026-03-11T01:24:47.856806374Z" level=info msg="StopPodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" returns successfully" Mar 11 01:24:47.857373 containerd[1717]: time="2026-03-11T01:24:47.857322055Z" level=info msg="RemovePodSandbox for \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\"" Mar 11 01:24:47.857373 containerd[1717]: time="2026-03-11T01:24:47.857356535Z" level=info msg="Forcibly stopping sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\"" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.921 [WARNING][5771] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6f6763b-72d4-4230-abc8-f8b4f7ba7e3b", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b", Pod:"csi-node-driver-tjh4h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.4.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6f9460fa25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.922 [INFO][5771] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.922 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" iface="eth0" netns="" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.922 [INFO][5771] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.922 [INFO][5771] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.944 [INFO][5778] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.945 [INFO][5778] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.945 [INFO][5778] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.954 [WARNING][5778] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.954 [INFO][5778] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" HandleID="k8s-pod-network.b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Workload="ci--4081.3.6--n--49f1e4db19-k8s-csi--node--driver--tjh4h-eth0" Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.956 [INFO][5778] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:47.960240 containerd[1717]: 2026-03-11 01:24:47.958 [INFO][5771] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55" Mar 11 01:24:47.960770 containerd[1717]: time="2026-03-11T01:24:47.960295871Z" level=info msg="TearDown network for sandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" successfully" Mar 11 01:24:47.970657 containerd[1717]: time="2026-03-11T01:24:47.970611128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:47.970771 containerd[1717]: time="2026-03-11T01:24:47.970689608Z" level=info msg="RemovePodSandbox \"b2d9db588c93ed6bdf45c75ef24ef5eb06bffc938796089bea6c10bc475f3c55\" returns successfully" Mar 11 01:24:47.971203 containerd[1717]: time="2026-03-11T01:24:47.971175689Z" level=info msg="StopPodSandbox for \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\"" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.011 [WARNING][5792] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1923a631-58e0-434b-b9f7-d22db35c541a", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30", Pod:"coredns-66bc5c9577-vq8w4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44e9231e721", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.011 [INFO][5792] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.011 [INFO][5792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" iface="eth0" netns="" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.011 [INFO][5792] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.011 [INFO][5792] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.027 [INFO][5800] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.028 [INFO][5800] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.028 [INFO][5800] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.036 [WARNING][5800] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.036 [INFO][5800] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.037 [INFO][5800] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.040379 containerd[1717]: 2026-03-11 01:24:48.038 [INFO][5792] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.040974 containerd[1717]: time="2026-03-11T01:24:48.040416047Z" level=info msg="TearDown network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\" successfully" Mar 11 01:24:48.040974 containerd[1717]: time="2026-03-11T01:24:48.040439327Z" level=info msg="StopPodSandbox for \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\" returns successfully" Mar 11 01:24:48.040974 containerd[1717]: time="2026-03-11T01:24:48.040852328Z" level=info msg="RemovePodSandbox for \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\"" Mar 11 01:24:48.040974 containerd[1717]: time="2026-03-11T01:24:48.040878968Z" level=info msg="Forcibly stopping sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\"" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.071 [WARNING][5814] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1923a631-58e0-434b-b9f7-d22db35c541a", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"19dd6f40fc32a2e522dd2bb242a0d710e8adbe984a42477d6fdcddfd1dbd7c30", Pod:"coredns-66bc5c9577-vq8w4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44e9231e721", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.072 [INFO][5814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.072 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" iface="eth0" netns="" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.072 [INFO][5814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.072 [INFO][5814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.096 [INFO][5823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.096 [INFO][5823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.096 [INFO][5823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.106 [WARNING][5823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.106 [INFO][5823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" HandleID="k8s-pod-network.abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--vq8w4-eth0" Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.108 [INFO][5823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.111354 containerd[1717]: 2026-03-11 01:24:48.109 [INFO][5814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52" Mar 11 01:24:48.111856 containerd[1717]: time="2026-03-11T01:24:48.111414888Z" level=info msg="TearDown network for sandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\" successfully" Mar 11 01:24:48.118609 containerd[1717]: time="2026-03-11T01:24:48.118575180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:48.118692 containerd[1717]: time="2026-03-11T01:24:48.118650100Z" level=info msg="RemovePodSandbox \"abc3b0297159402073589a30863dbb69a76bf6bf6f25e00d019eff63adb30a52\" returns successfully" Mar 11 01:24:48.119346 containerd[1717]: time="2026-03-11T01:24:48.119091741Z" level=info msg="StopPodSandbox for \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\"" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.149 [WARNING][5842] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0", GenerateName:"calico-kube-controllers-946cb894d-", Namespace:"calico-system", SelfLink:"", UID:"0b7b3ee1-30bd-400b-8e2b-54c143148a44", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"946cb894d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98", Pod:"calico-kube-controllers-946cb894d-x99qm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.129/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5f6e015954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.149 [INFO][5842] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.149 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" iface="eth0" netns="" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.149 [INFO][5842] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.149 [INFO][5842] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.166 [INFO][5849] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.166 [INFO][5849] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.166 [INFO][5849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.175 [WARNING][5849] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.175 [INFO][5849] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.177 [INFO][5849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.179877 containerd[1717]: 2026-03-11 01:24:48.178 [INFO][5842] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.180498 containerd[1717]: time="2026-03-11T01:24:48.180369605Z" level=info msg="TearDown network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\" successfully" Mar 11 01:24:48.180498 containerd[1717]: time="2026-03-11T01:24:48.180398245Z" level=info msg="StopPodSandbox for \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\" returns successfully" Mar 11 01:24:48.181160 containerd[1717]: time="2026-03-11T01:24:48.180812606Z" level=info msg="RemovePodSandbox for \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\"" Mar 11 01:24:48.181160 containerd[1717]: time="2026-03-11T01:24:48.180836646Z" level=info msg="Forcibly stopping sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\"" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.219 [WARNING][5863] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0", GenerateName:"calico-kube-controllers-946cb894d-", Namespace:"calico-system", SelfLink:"", UID:"0b7b3ee1-30bd-400b-8e2b-54c143148a44", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"946cb894d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"ab9a545a3223d3abbea625b9d84e0b134c43b14e2667a010f22e4019ed50bd98", Pod:"calico-kube-controllers-946cb894d-x99qm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.4.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5f6e015954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.219 [INFO][5863] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.219 [INFO][5863] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" iface="eth0" netns="" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.219 [INFO][5863] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.219 [INFO][5863] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.236 [INFO][5871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.236 [INFO][5871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.236 [INFO][5871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.246 [WARNING][5871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.246 [INFO][5871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" HandleID="k8s-pod-network.51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--kube--controllers--946cb894d--x99qm-eth0" Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.248 [INFO][5871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.251828 containerd[1717]: 2026-03-11 01:24:48.250 [INFO][5863] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1" Mar 11 01:24:48.251828 containerd[1717]: time="2026-03-11T01:24:48.251502206Z" level=info msg="TearDown network for sandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\" successfully" Mar 11 01:24:48.280568 containerd[1717]: time="2026-03-11T01:24:48.280421495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:48.280568 containerd[1717]: time="2026-03-11T01:24:48.280510056Z" level=info msg="RemovePodSandbox \"51f46b4d230db6ad96675295a81eaf010791cbe24db1a64b14af21d6c62f3ae1\" returns successfully" Mar 11 01:24:48.281170 containerd[1717]: time="2026-03-11T01:24:48.281149137Z" level=info msg="StopPodSandbox for \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\"" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.314 [WARNING][5886] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e98682e7-a223-4af4-85e4-1cd58ea80d8c", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21", Pod:"goldmane-cccfbd5cf-tglk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"caliceb51fc42c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.315 [INFO][5886] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.315 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" iface="eth0" netns="" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.315 [INFO][5886] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.315 [INFO][5886] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.332 [INFO][5894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.332 [INFO][5894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.332 [INFO][5894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.340 [WARNING][5894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.340 [INFO][5894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.341 [INFO][5894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.344511 containerd[1717]: 2026-03-11 01:24:48.343 [INFO][5886] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.345140 containerd[1717]: time="2026-03-11T01:24:48.344551005Z" level=info msg="TearDown network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\" successfully" Mar 11 01:24:48.345140 containerd[1717]: time="2026-03-11T01:24:48.344575565Z" level=info msg="StopPodSandbox for \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\" returns successfully" Mar 11 01:24:48.345140 containerd[1717]: time="2026-03-11T01:24:48.345105246Z" level=info msg="RemovePodSandbox for \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\"" Mar 11 01:24:48.345140 containerd[1717]: time="2026-03-11T01:24:48.345130006Z" level=info msg="Forcibly stopping sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\"" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.377 [WARNING][5908] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"e98682e7-a223-4af4-85e4-1cd58ea80d8c", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"89beb1b7d5d18ce4ddced02444d0373684bbfafc2d6cebc480b0b5f0c0c05b21", Pod:"goldmane-cccfbd5cf-tglk5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.4.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliceb51fc42c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.377 [INFO][5908] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.377 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" iface="eth0" netns="" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.377 [INFO][5908] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.377 [INFO][5908] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.397 [INFO][5916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.397 [INFO][5916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.397 [INFO][5916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.405 [WARNING][5916] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.405 [INFO][5916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" HandleID="k8s-pod-network.e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Workload="ci--4081.3.6--n--49f1e4db19-k8s-goldmane--cccfbd5cf--tglk5-eth0" Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.407 [INFO][5916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.410245 containerd[1717]: 2026-03-11 01:24:48.408 [INFO][5908] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936" Mar 11 01:24:48.410656 containerd[1717]: time="2026-03-11T01:24:48.410279556Z" level=info msg="TearDown network for sandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\" successfully" Mar 11 01:24:48.417330 containerd[1717]: time="2026-03-11T01:24:48.417298168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:48.417398 containerd[1717]: time="2026-03-11T01:24:48.417364848Z" level=info msg="RemovePodSandbox \"e680999eb27db3e4685143b7400571692d5b1391830317da996ee8af2d38a936\" returns successfully" Mar 11 01:24:48.418128 containerd[1717]: time="2026-03-11T01:24:48.417872929Z" level=info msg="StopPodSandbox for \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\"" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.474 [WARNING][5930] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"b4eb406e-fa82-44d9-b52c-29c1dd09bc5c", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6", Pod:"calico-apiserver-59b9679847-vdzb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib76d2880919", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.475 [INFO][5930] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.475 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" iface="eth0" netns="" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.475 [INFO][5930] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.475 [INFO][5930] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.515 [INFO][5941] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.515 [INFO][5941] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.515 [INFO][5941] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.526 [WARNING][5941] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.526 [INFO][5941] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.528 [INFO][5941] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.531849 containerd[1717]: 2026-03-11 01:24:48.529 [INFO][5930] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.532664 containerd[1717]: time="2026-03-11T01:24:48.532059484Z" level=info msg="TearDown network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\" successfully" Mar 11 01:24:48.532664 containerd[1717]: time="2026-03-11T01:24:48.532088604Z" level=info msg="StopPodSandbox for \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\" returns successfully" Mar 11 01:24:48.533498 containerd[1717]: time="2026-03-11T01:24:48.533226926Z" level=info msg="RemovePodSandbox for \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\"" Mar 11 01:24:48.533498 containerd[1717]: time="2026-03-11T01:24:48.533282366Z" level=info msg="Forcibly stopping sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\"" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.582 [WARNING][5955] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0", GenerateName:"calico-apiserver-59b9679847-", Namespace:"calico-system", SelfLink:"", UID:"b4eb406e-fa82-44d9-b52c-29c1dd09bc5c", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 24, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59b9679847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"9a5a8a8fd8c066397fdb4f93013058c249b5dc2a67e14c18788d3c657e5e7fa6", Pod:"calico-apiserver-59b9679847-vdzb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.4.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib76d2880919", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.582 [INFO][5955] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.582 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" iface="eth0" netns="" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.582 [INFO][5955] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.583 [INFO][5955] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.601 [INFO][5962] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.601 [INFO][5962] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.601 [INFO][5962] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.610 [WARNING][5962] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.610 [INFO][5962] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" HandleID="k8s-pod-network.8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-calico--apiserver--59b9679847--vdzb5-eth0" Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.611 [INFO][5962] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:48.614626 containerd[1717]: 2026-03-11 01:24:48.613 [INFO][5955] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3" Mar 11 01:24:48.615652 containerd[1717]: time="2026-03-11T01:24:48.614716064Z" level=info msg="TearDown network for sandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\" successfully" Mar 11 01:24:49.134383 containerd[1717]: time="2026-03-11T01:24:49.134336949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:49.134521 containerd[1717]: time="2026-03-11T01:24:49.134411109Z" level=info msg="RemovePodSandbox \"8ea2592699ea8811be2140f9607bd2a85fba5bd4a7e6124935ac6fe092e55fc3\" returns successfully" Mar 11 01:24:49.136238 containerd[1717]: time="2026-03-11T01:24:49.136201952Z" level=info msg="StopPodSandbox for \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\"" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.205 [WARNING][5980] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.205 [INFO][5980] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.205 [INFO][5980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" iface="eth0" netns="" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.205 [INFO][5980] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.205 [INFO][5980] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.246 [INFO][5987] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.246 [INFO][5987] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.247 [INFO][5987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.259 [WARNING][5987] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.259 [INFO][5987] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.260 [INFO][5987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:49.264370 containerd[1717]: 2026-03-11 01:24:49.262 [INFO][5980] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.265028 containerd[1717]: time="2026-03-11T01:24:49.264491290Z" level=info msg="TearDown network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\" successfully" Mar 11 01:24:49.265028 containerd[1717]: time="2026-03-11T01:24:49.264517450Z" level=info msg="StopPodSandbox for \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\" returns successfully" Mar 11 01:24:49.265879 containerd[1717]: time="2026-03-11T01:24:49.265853773Z" level=info msg="RemovePodSandbox for \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\"" Mar 11 01:24:49.265938 containerd[1717]: time="2026-03-11T01:24:49.265885293Z" level=info msg="Forcibly stopping sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\"" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.307 [WARNING][6002] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" WorkloadEndpoint="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.307 [INFO][6002] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.307 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" iface="eth0" netns="" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.307 [INFO][6002] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.307 [INFO][6002] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.332 [INFO][6011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.332 [INFO][6011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.332 [INFO][6011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.345 [WARNING][6011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.345 [INFO][6011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" HandleID="k8s-pod-network.e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Workload="ci--4081.3.6--n--49f1e4db19-k8s-whisker--6cfbf665c8--8wp24-eth0" Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.347 [INFO][6011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:49.350544 containerd[1717]: 2026-03-11 01:24:49.348 [INFO][6002] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e" Mar 11 01:24:49.350893 containerd[1717]: time="2026-03-11T01:24:49.350579837Z" level=info msg="TearDown network for sandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\" successfully" Mar 11 01:24:49.357586 containerd[1717]: time="2026-03-11T01:24:49.357541489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:49.357991 containerd[1717]: time="2026-03-11T01:24:49.357613929Z" level=info msg="RemovePodSandbox \"e2ee2da82d7a502662277b3263e3a75b82ffa7d047a3831faea19d302f354b3e\" returns successfully" Mar 11 01:24:49.358204 containerd[1717]: time="2026-03-11T01:24:49.358178450Z" level=info msg="StopPodSandbox for \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\"" Mar 11 01:24:49.423310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385543290.mount: Deactivated successfully. Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.398 [WARNING][6025] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"48c32900-5045-4f60-bbe5-13570adfb73f", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635", Pod:"coredns-66bc5c9577-2dzt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5629dc06b51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.398 [INFO][6025] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.398 [INFO][6025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" iface="eth0" netns="" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.398 [INFO][6025] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.398 [INFO][6025] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.427 [INFO][6033] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.427 [INFO][6033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.428 [INFO][6033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.438 [WARNING][6033] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.438 [INFO][6033] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.439 [INFO][6033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:49.442997 containerd[1717]: 2026-03-11 01:24:49.441 [INFO][6025] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.442997 containerd[1717]: time="2026-03-11T01:24:49.442888954Z" level=info msg="TearDown network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\" successfully" Mar 11 01:24:49.442997 containerd[1717]: time="2026-03-11T01:24:49.442913514Z" level=info msg="StopPodSandbox for \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\" returns successfully" Mar 11 01:24:49.444050 containerd[1717]: time="2026-03-11T01:24:49.443705835Z" level=info msg="RemovePodSandbox for \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\"" Mar 11 01:24:49.444050 containerd[1717]: time="2026-03-11T01:24:49.443737275Z" level=info msg="Forcibly stopping sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\"" Mar 11 01:24:49.462838 containerd[1717]: time="2026-03-11T01:24:49.462806508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Mar 11 01:24:49.466438 containerd[1717]: time="2026-03-11T01:24:49.466412634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 11 01:24:49.470476 containerd[1717]: time="2026-03-11T01:24:49.470098440Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:49.476672 containerd[1717]: time="2026-03-11T01:24:49.476239891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:49.477888 containerd[1717]: time="2026-03-11T01:24:49.477201412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.234917723s" Mar 11 01:24:49.477957 containerd[1717]: time="2026-03-11T01:24:49.477889733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 11 01:24:49.480096 containerd[1717]: time="2026-03-11T01:24:49.480071937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 11 01:24:49.492104 containerd[1717]: time="2026-03-11T01:24:49.491620957Z" level=info msg="CreateContainer within sandbox \"67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 11 01:24:49.523991 containerd[1717]: time="2026-03-11T01:24:49.523949612Z" level=info 
msg="CreateContainer within sandbox \"67a65729ca6a8ef0700f2ca74b9a165d711b47898335014e173d4a5c179751ec\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8af27c46395571893989d078f82cad347d849710904769d04e2f9d2dfa42bc68\"" Mar 11 01:24:49.524730 containerd[1717]: time="2026-03-11T01:24:49.524707973Z" level=info msg="StartContainer for \"8af27c46395571893989d078f82cad347d849710904769d04e2f9d2dfa42bc68\"" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.512 [WARNING][6051] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"48c32900-5045-4f60-bbe5-13570adfb73f", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 11, 1, 23, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-49f1e4db19", ContainerID:"b402bf51f80024e57ebf4a041ee7b3567839eee939f4cb52362421dd4496a635", Pod:"coredns-66bc5c9577-2dzt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.4.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali5629dc06b51", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.512 [INFO][6051] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.512 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" iface="eth0" netns="" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.512 [INFO][6051] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.512 [INFO][6051] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.537 [INFO][6059] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.537 [INFO][6059] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.538 [INFO][6059] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.553 [WARNING][6059] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.553 [INFO][6059] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" HandleID="k8s-pod-network.e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Workload="ci--4081.3.6--n--49f1e4db19-k8s-coredns--66bc5c9577--2dzt4-eth0" Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.554 [INFO][6059] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 11 01:24:49.560535 containerd[1717]: 2026-03-11 01:24:49.556 [INFO][6051] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3" Mar 11 01:24:49.560981 containerd[1717]: time="2026-03-11T01:24:49.560578434Z" level=info msg="TearDown network for sandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\" successfully" Mar 11 01:24:49.565614 systemd[1]: Started cri-containerd-8af27c46395571893989d078f82cad347d849710904769d04e2f9d2dfa42bc68.scope - libcontainer container 8af27c46395571893989d078f82cad347d849710904769d04e2f9d2dfa42bc68. Mar 11 01:24:49.664428 containerd[1717]: time="2026-03-11T01:24:49.664286971Z" level=info msg="StartContainer for \"8af27c46395571893989d078f82cad347d849710904769d04e2f9d2dfa42bc68\" returns successfully" Mar 11 01:24:49.666787 containerd[1717]: time="2026-03-11T01:24:49.666751255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 11 01:24:49.666863 containerd[1717]: time="2026-03-11T01:24:49.666805895Z" level=info msg="RemovePodSandbox \"e0bb0a05ad714e09cca0b61665ce1b773618489e0f1b67afccfa4700927eadc3\" returns successfully" Mar 11 01:24:50.787427 containerd[1717]: time="2026-03-11T01:24:50.787378962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:50.789685 containerd[1717]: time="2026-03-11T01:24:50.789657406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 11 01:24:50.792667 containerd[1717]: time="2026-03-11T01:24:50.792639731Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:50.796379 containerd[1717]: time="2026-03-11T01:24:50.796351777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 11 01:24:50.797359 containerd[1717]: time="2026-03-11T01:24:50.796978098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.316763201s" Mar 11 01:24:50.797359 containerd[1717]: time="2026-03-11T01:24:50.797010978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 11 01:24:50.804217 containerd[1717]: 
time="2026-03-11T01:24:50.804046430Z" level=info msg="CreateContainer within sandbox \"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 11 01:24:50.833992 containerd[1717]: time="2026-03-11T01:24:50.833951241Z" level=info msg="CreateContainer within sandbox \"94701a7d3b3a242345cad0b8f61b9ea51a1a50f4cd876f44f81f44aca7a9470b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d4360874e613f193fc41f3b0818ba56c4cd79c1f7bb0fdcf10756e2f4dce7fa1\"" Mar 11 01:24:50.835457 containerd[1717]: time="2026-03-11T01:24:50.834683562Z" level=info msg="StartContainer for \"d4360874e613f193fc41f3b0818ba56c4cd79c1f7bb0fdcf10756e2f4dce7fa1\"" Mar 11 01:24:50.861608 systemd[1]: Started cri-containerd-d4360874e613f193fc41f3b0818ba56c4cd79c1f7bb0fdcf10756e2f4dce7fa1.scope - libcontainer container d4360874e613f193fc41f3b0818ba56c4cd79c1f7bb0fdcf10756e2f4dce7fa1. Mar 11 01:24:50.888829 containerd[1717]: time="2026-03-11T01:24:50.888789935Z" level=info msg="StartContainer for \"d4360874e613f193fc41f3b0818ba56c4cd79c1f7bb0fdcf10756e2f4dce7fa1\" returns successfully" Mar 11 01:24:50.974614 kubelet[3129]: I0311 01:24:50.974533 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55745ff44c-mkbtt" podStartSLOduration=3.17736064 podStartE2EDuration="20.97451732s" podCreationTimestamp="2026-03-11 01:24:30 +0000 UTC" firstStartedPulling="2026-03-11 01:24:31.682462496 +0000 UTC m=+44.157187516" lastFinishedPulling="2026-03-11 01:24:49.479619176 +0000 UTC m=+61.954344196" observedRunningTime="2026-03-11 01:24:49.973772897 +0000 UTC m=+62.448497917" watchObservedRunningTime="2026-03-11 01:24:50.97451732 +0000 UTC m=+63.449242340" Mar 11 01:24:50.974987 kubelet[3129]: I0311 01:24:50.974716 3129 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tjh4h" 
podStartSLOduration=31.302151793 podStartE2EDuration="38.974712161s" podCreationTimestamp="2026-03-11 01:24:12 +0000 UTC" firstStartedPulling="2026-03-11 01:24:43.125496212 +0000 UTC m=+55.600221192" lastFinishedPulling="2026-03-11 01:24:50.79805658 +0000 UTC m=+63.272781560" observedRunningTime="2026-03-11 01:24:50.97443552 +0000 UTC m=+63.449160540" watchObservedRunningTime="2026-03-11 01:24:50.974712161 +0000 UTC m=+63.449437181" Mar 11 01:24:51.726912 kubelet[3129]: I0311 01:24:51.726880 3129 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 11 01:24:51.729664 kubelet[3129]: I0311 01:24:51.729645 3129 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 11 01:25:15.946327 systemd[1]: run-containerd-runc-k8s.io-1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23-runc.xE2gBW.mount: Deactivated successfully. Mar 11 01:25:17.740483 kubelet[3129]: I0311 01:25:17.738841 3129 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 01:25:30.862109 systemd[1]: run-containerd-runc-k8s.io-1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198-runc.2NnO0Z.mount: Deactivated successfully. Mar 11 01:25:31.942736 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:40026.service - OpenSSH per-connection server daemon (10.200.16.10:40026). Mar 11 01:25:32.429033 sshd[6316]: Accepted publickey for core from 10.200.16.10 port 40026 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:25:32.431439 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:25:32.437601 systemd-logind[1690]: New session 10 of user core. Mar 11 01:25:32.442594 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 11 01:25:32.871603 sshd[6316]: pam_unix(sshd:session): session closed for user core Mar 11 01:25:32.880075 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:40026.service: Deactivated successfully. Mar 11 01:25:32.885784 systemd[1]: session-10.scope: Deactivated successfully. Mar 11 01:25:32.887253 systemd-logind[1690]: Session 10 logged out. Waiting for processes to exit. Mar 11 01:25:32.888295 systemd-logind[1690]: Removed session 10. Mar 11 01:25:37.953992 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:40040.service - OpenSSH per-connection server daemon (10.200.16.10:40040). Mar 11 01:25:38.402595 sshd[6330]: Accepted publickey for core from 10.200.16.10 port 40040 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:25:38.403493 sshd[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:25:38.409163 systemd-logind[1690]: New session 11 of user core. Mar 11 01:25:38.414591 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 11 01:25:38.795965 sshd[6330]: pam_unix(sshd:session): session closed for user core Mar 11 01:25:38.800259 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:40040.service: Deactivated successfully. Mar 11 01:25:38.802230 systemd[1]: session-11.scope: Deactivated successfully. Mar 11 01:25:38.803334 systemd-logind[1690]: Session 11 logged out. Waiting for processes to exit. Mar 11 01:25:38.804285 systemd-logind[1690]: Removed session 11. Mar 11 01:25:43.882020 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:43400.service - OpenSSH per-connection server daemon (10.200.16.10:43400). Mar 11 01:25:44.291989 sshd[6363]: Accepted publickey for core from 10.200.16.10 port 43400 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI Mar 11 01:25:44.292806 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 11 01:25:44.296556 systemd-logind[1690]: New session 12 of user core. 
Mar 11 01:25:44.300583 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 11 01:25:44.654996 sshd[6363]: pam_unix(sshd:session): session closed for user core
Mar 11 01:25:44.658922 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:43400.service: Deactivated successfully.
Mar 11 01:25:44.660612 systemd[1]: session-12.scope: Deactivated successfully.
Mar 11 01:25:44.661161 systemd-logind[1690]: Session 12 logged out. Waiting for processes to exit.
Mar 11 01:25:44.662209 systemd-logind[1690]: Removed session 12.
Mar 11 01:25:49.747406 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:43416.service - OpenSSH per-connection server daemon (10.200.16.10:43416).
Mar 11 01:25:50.238253 sshd[6398]: Accepted publickey for core from 10.200.16.10 port 43416 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:25:50.239696 sshd[6398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:25:50.243380 systemd-logind[1690]: New session 13 of user core.
Mar 11 01:25:50.250610 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 11 01:25:50.648743 sshd[6398]: pam_unix(sshd:session): session closed for user core
Mar 11 01:25:50.652362 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:43416.service: Deactivated successfully.
Mar 11 01:25:50.655028 systemd[1]: session-13.scope: Deactivated successfully.
Mar 11 01:25:50.656139 systemd-logind[1690]: Session 13 logged out. Waiting for processes to exit.
Mar 11 01:25:50.658657 systemd-logind[1690]: Removed session 13.
Mar 11 01:25:50.731280 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:53028.service - OpenSSH per-connection server daemon (10.200.16.10:53028).
Mar 11 01:25:51.180261 sshd[6422]: Accepted publickey for core from 10.200.16.10 port 53028 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:25:51.181716 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:25:51.186332 systemd-logind[1690]: New session 14 of user core.
Mar 11 01:25:51.190589 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 11 01:25:51.624756 sshd[6422]: pam_unix(sshd:session): session closed for user core
Mar 11 01:25:51.628332 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:53028.service: Deactivated successfully.
Mar 11 01:25:51.629931 systemd[1]: session-14.scope: Deactivated successfully.
Mar 11 01:25:51.630675 systemd-logind[1690]: Session 14 logged out. Waiting for processes to exit.
Mar 11 01:25:51.631553 systemd-logind[1690]: Removed session 14.
Mar 11 01:25:51.714677 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:53044.service - OpenSSH per-connection server daemon (10.200.16.10:53044).
Mar 11 01:25:52.193491 sshd[6442]: Accepted publickey for core from 10.200.16.10 port 53044 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:25:52.194353 sshd[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:25:52.198759 systemd-logind[1690]: New session 15 of user core.
Mar 11 01:25:52.201604 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 11 01:25:52.612703 sshd[6442]: pam_unix(sshd:session): session closed for user core
Mar 11 01:25:52.616834 systemd-logind[1690]: Session 15 logged out. Waiting for processes to exit.
Mar 11 01:25:52.617171 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:53044.service: Deactivated successfully.
Mar 11 01:25:52.619002 systemd[1]: session-15.scope: Deactivated successfully.
Mar 11 01:25:52.621151 systemd-logind[1690]: Removed session 15.
Mar 11 01:25:57.692976 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:53046.service - OpenSSH per-connection server daemon (10.200.16.10:53046).
Mar 11 01:25:58.148073 sshd[6466]: Accepted publickey for core from 10.200.16.10 port 53046 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:25:58.149588 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:25:58.153870 systemd-logind[1690]: New session 16 of user core.
Mar 11 01:25:58.161590 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 11 01:25:58.570509 sshd[6466]: pam_unix(sshd:session): session closed for user core
Mar 11 01:25:58.574610 systemd-logind[1690]: Session 16 logged out. Waiting for processes to exit.
Mar 11 01:25:58.574938 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:53046.service: Deactivated successfully.
Mar 11 01:25:58.576591 systemd[1]: session-16.scope: Deactivated successfully.
Mar 11 01:25:58.577541 systemd-logind[1690]: Removed session 16.
Mar 11 01:25:58.655669 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:53062.service - OpenSSH per-connection server daemon (10.200.16.10:53062).
Mar 11 01:25:59.100103 sshd[6480]: Accepted publickey for core from 10.200.16.10 port 53062 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:25:59.101534 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:25:59.105952 systemd-logind[1690]: New session 17 of user core.
Mar 11 01:25:59.110576 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 11 01:25:59.631864 sshd[6480]: pam_unix(sshd:session): session closed for user core
Mar 11 01:25:59.636170 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:53062.service: Deactivated successfully.
Mar 11 01:25:59.638116 systemd[1]: session-17.scope: Deactivated successfully.
Mar 11 01:25:59.639124 systemd-logind[1690]: Session 17 logged out. Waiting for processes to exit.
Mar 11 01:25:59.640257 systemd-logind[1690]: Removed session 17.
Mar 11 01:25:59.718978 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:53070.service - OpenSSH per-connection server daemon (10.200.16.10:53070).
Mar 11 01:26:00.206104 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 53070 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:00.208540 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:00.212459 systemd-logind[1690]: New session 18 of user core.
Mar 11 01:26:00.217050 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 11 01:26:00.871994 systemd[1]: run-containerd-runc-k8s.io-1a1da4d43af08c4535c1cc5c976d5b2516161498bd13c546a68e2c5340460198-runc.5ku4DB.mount: Deactivated successfully.
Mar 11 01:26:01.176593 sshd[6490]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:01.180665 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:53070.service: Deactivated successfully.
Mar 11 01:26:01.182271 systemd[1]: session-18.scope: Deactivated successfully.
Mar 11 01:26:01.183033 systemd-logind[1690]: Session 18 logged out. Waiting for processes to exit.
Mar 11 01:26:01.185439 systemd-logind[1690]: Removed session 18.
Mar 11 01:26:01.262675 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:36004.service - OpenSSH per-connection server daemon (10.200.16.10:36004).
Mar 11 01:26:01.708837 sshd[6535]: Accepted publickey for core from 10.200.16.10 port 36004 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:01.709653 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:01.713217 systemd-logind[1690]: New session 19 of user core.
Mar 11 01:26:01.720581 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 11 01:26:02.221422 sshd[6535]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:02.225848 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:36004.service: Deactivated successfully.
Mar 11 01:26:02.227917 systemd[1]: session-19.scope: Deactivated successfully.
Mar 11 01:26:02.229018 systemd-logind[1690]: Session 19 logged out. Waiting for processes to exit.
Mar 11 01:26:02.230070 systemd-logind[1690]: Removed session 19.
Mar 11 01:26:02.309500 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:36016.service - OpenSSH per-connection server daemon (10.200.16.10:36016).
Mar 11 01:26:02.721634 sshd[6548]: Accepted publickey for core from 10.200.16.10 port 36016 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:02.722421 sshd[6548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:02.726307 systemd-logind[1690]: New session 20 of user core.
Mar 11 01:26:02.733581 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 11 01:26:03.082752 sshd[6548]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:03.088653 systemd-logind[1690]: Session 20 logged out. Waiting for processes to exit.
Mar 11 01:26:03.090412 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:36016.service: Deactivated successfully.
Mar 11 01:26:03.092692 systemd[1]: session-20.scope: Deactivated successfully.
Mar 11 01:26:03.095734 systemd-logind[1690]: Removed session 20.
Mar 11 01:26:08.174670 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:36022.service - OpenSSH per-connection server daemon (10.200.16.10:36022).
Mar 11 01:26:08.620606 sshd[6588]: Accepted publickey for core from 10.200.16.10 port 36022 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:08.641063 sshd[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:08.645691 systemd-logind[1690]: New session 21 of user core.
Mar 11 01:26:08.652600 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 11 01:26:09.003064 sshd[6588]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:09.006641 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:36022.service: Deactivated successfully.
Mar 11 01:26:09.008791 systemd[1]: session-21.scope: Deactivated successfully.
Mar 11 01:26:09.011011 systemd-logind[1690]: Session 21 logged out. Waiting for processes to exit.
Mar 11 01:26:09.011811 systemd-logind[1690]: Removed session 21.
Mar 11 01:26:14.089324 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:55916.service - OpenSSH per-connection server daemon (10.200.16.10:55916).
Mar 11 01:26:14.559116 sshd[6653]: Accepted publickey for core from 10.200.16.10 port 55916 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:14.560475 sshd[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:14.565560 systemd-logind[1690]: New session 22 of user core.
Mar 11 01:26:14.569592 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 11 01:26:14.960155 sshd[6653]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:14.963617 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:55916.service: Deactivated successfully.
Mar 11 01:26:14.965552 systemd[1]: session-22.scope: Deactivated successfully.
Mar 11 01:26:14.966358 systemd-logind[1690]: Session 22 logged out. Waiting for processes to exit.
Mar 11 01:26:14.967622 systemd-logind[1690]: Removed session 22.
Mar 11 01:26:15.947367 systemd[1]: run-containerd-runc-k8s.io-1d87784910d8c13279fcda33959ae558b6c4dc5a7609c05220341401e6beeb23-runc.Vq460K.mount: Deactivated successfully.
Mar 11 01:26:20.038785 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:33338.service - OpenSSH per-connection server daemon (10.200.16.10:33338).
Mar 11 01:26:20.492460 sshd[6687]: Accepted publickey for core from 10.200.16.10 port 33338 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:20.493869 sshd[6687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:20.497646 systemd-logind[1690]: New session 23 of user core.
Mar 11 01:26:20.506589 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 11 01:26:20.879843 sshd[6687]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:20.884077 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:33338.service: Deactivated successfully.
Mar 11 01:26:20.886501 systemd[1]: session-23.scope: Deactivated successfully.
Mar 11 01:26:20.887214 systemd-logind[1690]: Session 23 logged out. Waiting for processes to exit.
Mar 11 01:26:20.888335 systemd-logind[1690]: Removed session 23.
Mar 11 01:26:25.975174 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:33348.service - OpenSSH per-connection server daemon (10.200.16.10:33348).
Mar 11 01:26:26.472320 sshd[6720]: Accepted publickey for core from 10.200.16.10 port 33348 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:26.474308 sshd[6720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:26.479510 systemd-logind[1690]: New session 24 of user core.
Mar 11 01:26:26.489618 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 11 01:26:26.876677 sshd[6720]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:26.879255 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:33348.service: Deactivated successfully.
Mar 11 01:26:26.882977 systemd[1]: session-24.scope: Deactivated successfully.
Mar 11 01:26:26.885129 systemd-logind[1690]: Session 24 logged out. Waiting for processes to exit.
Mar 11 01:26:26.886177 systemd-logind[1690]: Removed session 24.
Mar 11 01:26:31.969674 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:35342.service - OpenSSH per-connection server daemon (10.200.16.10:35342).
Mar 11 01:26:32.456704 sshd[6753]: Accepted publickey for core from 10.200.16.10 port 35342 ssh2: RSA SHA256:aKs++qWXmU0p8ywakqPK357SogTFFOoBb0ARJbOu5OI
Mar 11 01:26:32.458059 sshd[6753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 11 01:26:32.461842 systemd-logind[1690]: New session 25 of user core.
Mar 11 01:26:32.466589 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 11 01:26:32.870420 sshd[6753]: pam_unix(sshd:session): session closed for user core
Mar 11 01:26:32.875699 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:35342.service: Deactivated successfully.
Mar 11 01:26:32.878421 systemd[1]: session-25.scope: Deactivated successfully.
Mar 11 01:26:32.879633 systemd-logind[1690]: Session 25 logged out. Waiting for processes to exit.
Mar 11 01:26:32.880738 systemd-logind[1690]: Removed session 25.