Mar 4 01:04:16.163247 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 01:04:16.163268 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 01:04:16.163276 kernel: KASLR enabled
Mar 4 01:04:16.163282 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 01:04:16.163289 kernel: printk: bootconsole [pl11] enabled
Mar 4 01:04:16.163295 kernel: efi: EFI v2.7 by EDK II
Mar 4 01:04:16.163302 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 01:04:16.163309 kernel: random: crng init done
Mar 4 01:04:16.163315 kernel: ACPI: Early table checksum verification disabled
Mar 4 01:04:16.163320 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 01:04:16.163327 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163333 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163340 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 01:04:16.163346 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163354 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163360 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163367 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163375 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163381 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163387 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 01:04:16.163394 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163400 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 01:04:16.163407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 01:04:16.163413 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 01:04:16.163419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 01:04:16.163426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 01:04:16.163432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 01:04:16.163439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 01:04:16.163447 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 01:04:16.163453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 01:04:16.163460 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 01:04:16.163466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 01:04:16.163473 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 01:04:16.163479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 01:04:16.163485 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 01:04:16.163492 kernel: Zone ranges:
Mar 4 01:04:16.163498 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 01:04:16.163505 kernel: DMA32 empty
Mar 4 01:04:16.163511 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 01:04:16.163518 kernel: Movable zone start for each node
Mar 4 01:04:16.163528 kernel: Early memory node ranges
Mar 4 01:04:16.163535 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 01:04:16.163542 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 01:04:16.163549 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 01:04:16.163555 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 01:04:16.163563 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 01:04:16.163570 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 01:04:16.163577 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 01:04:16.163584 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 01:04:16.163591 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 01:04:16.163597 kernel: psci: probing for conduit method from ACPI.
Mar 4 01:04:16.163604 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 01:04:16.163611 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 01:04:16.163618 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 01:04:16.163624 kernel: psci: SMC Calling Convention v1.4
Mar 4 01:04:16.163631 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 01:04:16.163638 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 01:04:16.163646 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 01:04:16.163653 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 01:04:16.163660 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 01:04:16.163667 kernel: Detected PIPT I-cache on CPU0
Mar 4 01:04:16.163674 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 01:04:16.163680 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 01:04:16.163687 kernel: CPU features: detected: Spectre-BHB
Mar 4 01:04:16.165722 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 01:04:16.165734 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 01:04:16.165741 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 01:04:16.165748 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 01:04:16.165759 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 01:04:16.165766 kernel: alternatives: applying boot alternatives
Mar 4 01:04:16.165775 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 01:04:16.165782 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 01:04:16.165789 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 01:04:16.165796 kernel: Fallback order for Node 0: 0
Mar 4 01:04:16.165803 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 01:04:16.165810 kernel: Policy zone: Normal
Mar 4 01:04:16.165817 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 01:04:16.165824 kernel: software IO TLB: area num 2.
Mar 4 01:04:16.165830 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 01:04:16.165839 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 01:04:16.165846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 01:04:16.165853 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 01:04:16.165861 kernel: rcu: RCU event tracing is enabled.
Mar 4 01:04:16.165868 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 01:04:16.165875 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 01:04:16.165882 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 01:04:16.165889 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 01:04:16.165895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 01:04:16.165902 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 01:04:16.165909 kernel: GICv3: 960 SPIs implemented
Mar 4 01:04:16.165917 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 01:04:16.165924 kernel: Root IRQ handler: gic_handle_irq
Mar 4 01:04:16.165931 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 01:04:16.165938 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 01:04:16.165945 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 01:04:16.165952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 01:04:16.165959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 01:04:16.165965 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 01:04:16.165973 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 01:04:16.165979 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 01:04:16.165986 kernel: Console: colour dummy device 80x25
Mar 4 01:04:16.165995 kernel: printk: console [tty1] enabled
Mar 4 01:04:16.166002 kernel: ACPI: Core revision 20230628
Mar 4 01:04:16.166009 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 01:04:16.166017 kernel: pid_max: default: 32768 minimum: 301
Mar 4 01:04:16.166024 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 01:04:16.166031 kernel: landlock: Up and running.
Mar 4 01:04:16.166038 kernel: SELinux: Initializing.
Mar 4 01:04:16.166045 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166060 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 01:04:16.166068 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 01:04:16.166075 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 01:04:16.166082 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 01:04:16.166089 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 01:04:16.166096 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 01:04:16.166103 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 01:04:16.166110 kernel: Remapping and enabling EFI services.
Mar 4 01:04:16.166123 kernel: smp: Bringing up secondary CPUs ...
Mar 4 01:04:16.166131 kernel: Detected PIPT I-cache on CPU1
Mar 4 01:04:16.166138 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 01:04:16.166145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 01:04:16.166154 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 01:04:16.166161 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 01:04:16.166169 kernel: SMP: Total of 2 processors activated.
Mar 4 01:04:16.166176 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 01:04:16.166184 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 01:04:16.166192 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 01:04:16.166200 kernel: CPU features: detected: CRC32 instructions
Mar 4 01:04:16.166207 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 01:04:16.166215 kernel: CPU features: detected: LSE atomic instructions
Mar 4 01:04:16.166222 kernel: CPU features: detected: Privileged Access Never
Mar 4 01:04:16.166230 kernel: CPU: All CPU(s) started at EL1
Mar 4 01:04:16.166237 kernel: alternatives: applying system-wide alternatives
Mar 4 01:04:16.166244 kernel: devtmpfs: initialized
Mar 4 01:04:16.166252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 01:04:16.166261 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 01:04:16.166269 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 01:04:16.166276 kernel: SMBIOS 3.1.0 present.
Mar 4 01:04:16.166283 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 01:04:16.166291 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 01:04:16.166299 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 01:04:16.166306 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 01:04:16.166314 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 01:04:16.166321 kernel: audit: initializing netlink subsys (disabled)
Mar 4 01:04:16.166330 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 01:04:16.166337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 01:04:16.166344 kernel: cpuidle: using governor menu
Mar 4 01:04:16.166352 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 01:04:16.166359 kernel: ASID allocator initialised with 32768 entries
Mar 4 01:04:16.166367 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 01:04:16.166374 kernel: Serial: AMBA PL011 UART driver
Mar 4 01:04:16.166381 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 01:04:16.166389 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 01:04:16.166398 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 01:04:16.166405 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166413 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 01:04:16.166420 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 01:04:16.166435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 01:04:16.166450 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 01:04:16.166466 kernel: ACPI: Added _OSI(Module Device)
Mar 4 01:04:16.166473 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 01:04:16.166481 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 01:04:16.166488 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 01:04:16.166495 kernel: ACPI: Interpreter enabled
Mar 4 01:04:16.166502 kernel: ACPI: Using GIC for interrupt routing
Mar 4 01:04:16.166510 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 01:04:16.166517 kernel: printk: console [ttyAMA0] enabled
Mar 4 01:04:16.166524 kernel: printk: bootconsole [pl11] disabled
Mar 4 01:04:16.166533 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 01:04:16.166541 kernel: iommu: Default domain type: Translated
Mar 4 01:04:16.166548 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 01:04:16.166556 kernel: efivars: Registered efivars operations
Mar 4 01:04:16.166563 kernel: vgaarb: loaded
Mar 4 01:04:16.166570 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 01:04:16.166577 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 01:04:16.166585 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 01:04:16.166592 kernel: pnp: PnP ACPI init
Mar 4 01:04:16.166600 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 01:04:16.166608 kernel: NET: Registered PF_INET protocol family
Mar 4 01:04:16.166615 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 01:04:16.166623 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 01:04:16.166630 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 01:04:16.166638 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 01:04:16.166645 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 01:04:16.166652 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 01:04:16.166660 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166669 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 01:04:16.166684 kernel: PCI: CLS 0 bytes, default 64
Mar 4 01:04:16.166699 kernel: kvm [1]: HYP mode not available
Mar 4 01:04:16.166706 kernel: Initialise system trusted keyrings
Mar 4 01:04:16.166714 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 01:04:16.166721 kernel: Key type asymmetric registered
Mar 4 01:04:16.166728 kernel: Asymmetric key parser 'x509' registered
Mar 4 01:04:16.166735 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 01:04:16.166745 kernel: io scheduler mq-deadline registered
Mar 4 01:04:16.166752 kernel: io scheduler kyber registered
Mar 4 01:04:16.166759 kernel: io scheduler bfq registered
Mar 4 01:04:16.166767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 01:04:16.166774 kernel: thunder_xcv, ver 1.0
Mar 4 01:04:16.166781 kernel: thunder_bgx, ver 1.0
Mar 4 01:04:16.166788 kernel: nicpf, ver 1.0
Mar 4 01:04:16.166796 kernel: nicvf, ver 1.0
Mar 4 01:04:16.166934 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 01:04:16.167011 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T01:04:15 UTC (1772586255)
Mar 4 01:04:16.167022 kernel: efifb: probing for efifb
Mar 4 01:04:16.167030 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 01:04:16.167037 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 01:04:16.167045 kernel: efifb: scrolling: redraw
Mar 4 01:04:16.167052 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 01:04:16.167059 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 01:04:16.167067 kernel: fb0: EFI VGA frame buffer device
Mar 4 01:04:16.167076 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 01:04:16.167084 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 01:04:16.167091 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 01:04:16.167102 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 01:04:16.167109 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 01:04:16.167116 kernel: NET: Registered PF_INET6 protocol family
Mar 4 01:04:16.167124 kernel: Segment Routing with IPv6
Mar 4 01:04:16.167131 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 01:04:16.167138 kernel: NET: Registered PF_PACKET protocol family
Mar 4 01:04:16.167147 kernel: Key type dns_resolver registered
Mar 4 01:04:16.167154 kernel: registered taskstats version 1
Mar 4 01:04:16.167161 kernel: Loading compiled-in X.509 certificates
Mar 4 01:04:16.167169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 01:04:16.167176 kernel: Key type .fscrypt registered
Mar 4 01:04:16.167183 kernel: Key type fscrypt-provisioning registered
Mar 4 01:04:16.167190 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 01:04:16.167198 kernel: ima: Allocated hash algorithm: sha1
Mar 4 01:04:16.167205 kernel: ima: No architecture policies found
Mar 4 01:04:16.167215 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 01:04:16.167223 kernel: clk: Disabling unused clocks
Mar 4 01:04:16.167230 kernel: Freeing unused kernel memory: 39424K
Mar 4 01:04:16.167238 kernel: Run /init as init process
Mar 4 01:04:16.167245 kernel: with arguments:
Mar 4 01:04:16.167252 kernel: /init
Mar 4 01:04:16.167260 kernel: with environment:
Mar 4 01:04:16.167266 kernel: HOME=/
Mar 4 01:04:16.167274 kernel: TERM=linux
Mar 4 01:04:16.167283 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:04:16.167294 systemd[1]: Detected virtualization microsoft.
Mar 4 01:04:16.167302 systemd[1]: Detected architecture arm64.
Mar 4 01:04:16.167310 systemd[1]: Running in initrd.
Mar 4 01:04:16.167317 systemd[1]: No hostname configured, using default hostname.
Mar 4 01:04:16.167325 systemd[1]: Hostname set to .
Mar 4 01:04:16.167333 systemd[1]: Initializing machine ID from random generator.
Mar 4 01:04:16.167343 systemd[1]: Queued start job for default target initrd.target.
Mar 4 01:04:16.167351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:04:16.167359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:04:16.167368 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 01:04:16.167376 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:04:16.167384 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 01:04:16.167392 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 01:04:16.167401 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 01:04:16.167411 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 01:04:16.167419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:04:16.167427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:04:16.167436 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:04:16.167444 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:04:16.167452 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:04:16.167459 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:04:16.167467 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:04:16.167477 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:04:16.167485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 01:04:16.167493 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 01:04:16.167501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:04:16.167509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:04:16.167517 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:04:16.167525 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:04:16.167533 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 01:04:16.167543 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:04:16.167551 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 01:04:16.167559 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 01:04:16.167567 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:04:16.167574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:04:16.167600 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 01:04:16.167621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:04:16.167630 systemd-journald[217]: Journal started
Mar 4 01:04:16.167649 systemd-journald[217]: Runtime Journal (/run/log/journal/32071dc171a94957ad34dc8c9b2dec29) is 8.0M, max 78.5M, 70.5M free.
Mar 4 01:04:16.172981 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 01:04:16.184751 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:04:16.185309 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 01:04:16.197044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:04:16.215230 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 01:04:16.222105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 01:04:16.222135 kernel: Bridge firewalling registered
Mar 4 01:04:16.224824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:04:16.225567 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 01:04:16.235162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:04:16.257854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:04:16.265867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:04:16.283854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:04:16.299848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:04:16.310765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:04:16.317220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:04:16.326972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:04:16.335907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:04:16.355938 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 01:04:16.367280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:04:16.382422 dracut-cmdline[249]: dracut-dracut-053
Mar 4 01:04:16.393260 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 01:04:16.384262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:04:16.398479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:04:16.444854 systemd-resolved[255]: Positive Trust Anchors:
Mar 4 01:04:16.444868 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:04:16.444900 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:04:16.447110 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 4 01:04:16.448921 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:04:16.454312 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:04:16.538715 kernel: SCSI subsystem initialized
Mar 4 01:04:16.545709 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 01:04:16.554712 kernel: iscsi: registered transport (tcp)
Mar 4 01:04:16.571199 kernel: iscsi: registered transport (qla4xxx)
Mar 4 01:04:16.571229 kernel: QLogic iSCSI HBA Driver
Mar 4 01:04:16.610042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:04:16.623813 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 01:04:16.651662 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 01:04:16.651716 kernel: device-mapper: uevent: version 1.0.3
Mar 4 01:04:16.656592 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 01:04:16.720700 kernel: raid6: neonx8 gen() 15807 MB/s
Mar 4 01:04:16.726711 kernel: raid6: neonx4 gen() 15692 MB/s
Mar 4 01:04:16.742704 kernel: raid6: neonx2 gen() 13312 MB/s
Mar 4 01:04:16.762702 kernel: raid6: neonx1 gen() 10492 MB/s
Mar 4 01:04:16.781701 kernel: raid6: int64x8 gen() 6981 MB/s
Mar 4 01:04:16.800700 kernel: raid6: int64x4 gen() 7363 MB/s
Mar 4 01:04:16.820701 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 4 01:04:16.842452 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 4 01:04:16.842471 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s
Mar 4 01:04:16.864543 kernel: raid6: .... xor() 12037 MB/s, rmw enabled
Mar 4 01:04:16.864553 kernel: raid6: using neon recovery algorithm
Mar 4 01:04:16.874435 kernel: xor: measuring software checksum speed
Mar 4 01:04:16.874448 kernel: 8regs : 19745 MB/sec
Mar 4 01:04:16.880985 kernel: 32regs : 19119 MB/sec
Mar 4 01:04:16.881009 kernel: arm64_neon : 26927 MB/sec
Mar 4 01:04:16.883999 kernel: xor: using function: arm64_neon (26927 MB/sec)
Mar 4 01:04:16.933870 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 01:04:16.942726 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:04:16.956823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:04:16.972498 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 4 01:04:16.976562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:04:16.990857 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 01:04:17.006104 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Mar 4 01:04:17.031387 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:04:17.046866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:04:17.082867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:04:17.097877 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 01:04:17.125416 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:04:17.139024 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:04:17.149632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:04:17.161579 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:04:17.173889 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 01:04:17.192041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:04:17.203994 kernel: hv_vmbus: Vmbus version:5.3
Mar 4 01:04:17.205864 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:04:17.206019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:04:17.229458 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:04:17.263827 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 4 01:04:17.263850 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 01:04:17.263860 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 4 01:04:17.263877 kernel: hv_vmbus: registering driver hv_netvsc
Mar 4 01:04:17.263886 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 4 01:04:17.251899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:04:17.276619 kernel: PTP clock support registered
Mar 4 01:04:17.252112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:04:17.315158 kernel: hv_vmbus: registering driver hv_storvsc Mar 4 01:04:17.315180 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 4 01:04:17.315191 kernel: scsi host0: storvsc_host_t Mar 4 01:04:17.315357 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 4 01:04:17.315370 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 4 01:04:17.315391 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 4 01:04:17.272339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:04:17.320470 kernel: scsi host1: storvsc_host_t Mar 4 01:04:17.327629 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 4 01:04:17.328231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:04:17.353804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:04:17.370852 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:04:17.386670 kernel: hv_utils: Registering HyperV Utility Driver Mar 4 01:04:17.386829 kernel: hv_vmbus: registering driver hv_utils Mar 4 01:04:17.401502 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 4 01:04:17.401667 kernel: hv_utils: Heartbeat IC version 3.0 Mar 4 01:04:17.401679 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 4 01:04:17.401783 kernel: hv_utils: Shutdown IC version 3.2 Mar 4 01:04:17.401794 kernel: hv_utils: TimeSync IC version 4.0 Mar 4 01:04:17.410920 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 4 01:04:17.411071 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: VF slot 1 added Mar 4 01:04:16.980267 systemd-resolved[255]: Clock change detected. Flushing caches. 
Mar 4 01:04:16.999140 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 4 01:04:16.999283 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 4 01:04:16.999982 systemd-journald[217]: Time jumped backwards, rotating. Mar 4 01:04:17.007375 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 01:04:17.016107 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 4 01:04:17.010924 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:04:17.039171 kernel: hv_vmbus: registering driver hv_pci Mar 4 01:04:17.039192 kernel: hv_pci 5a074528-a391-49e1-bd32-2e4a93b800a6: PCI VMBus probing: Using version 0x10004 Mar 4 01:04:17.039388 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 4 01:04:17.039520 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 4 01:04:17.048560 kernel: hv_pci 5a074528-a391-49e1-bd32-2e4a93b800a6: PCI host bridge to bus a391:00 Mar 4 01:04:17.048681 kernel: pci_bus a391:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 4 01:04:17.053194 kernel: pci_bus a391:00: No busn resource found for root bus, will use [bus 00-ff] Mar 4 01:04:17.054678 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 4 01:04:17.060962 kernel: pci a391:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 4 01:04:17.071086 kernel: pci a391:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 4 01:04:17.071140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 01:04:17.071287 kernel: pci a391:00:02.0: enabling Extended Tags Mar 4 01:04:17.103633 kernel: pci a391:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a391:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 4 01:04:17.103839 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 01:04:17.103935 kernel: pci_bus a391:00: busn_res: [bus 00-ff] end is 
updated to 00 Mar 4 01:04:17.113381 kernel: pci a391:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 4 01:04:17.153809 kernel: mlx5_core a391:00:02.0: enabling device (0000 -> 0002) Mar 4 01:04:17.160376 kernel: mlx5_core a391:00:02.0: firmware version: 16.30.5026 Mar 4 01:04:17.357939 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: VF registering: eth1 Mar 4 01:04:17.358142 kernel: mlx5_core a391:00:02.0 eth1: joined to eth0 Mar 4 01:04:17.364431 kernel: mlx5_core a391:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 4 01:04:17.373382 kernel: mlx5_core a391:00:02.0 enP41873s1: renamed from eth1 Mar 4 01:04:17.531929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 4 01:04:17.569383 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486) Mar 4 01:04:17.583862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 4 01:04:17.601411 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (481) Mar 4 01:04:17.611214 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 4 01:04:17.616387 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 4 01:04:17.637623 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 4 01:04:17.649155 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 4 01:04:17.665372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 01:04:17.674372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 01:04:17.683371 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 01:04:18.683332 disk-uuid[605]: The operation has completed successfully. 
Mar 4 01:04:18.689354 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 4 01:04:18.755185 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 01:04:18.756543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 01:04:18.785539 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 4 01:04:18.795843 sh[718]: Success Mar 4 01:04:18.826397 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 4 01:04:19.064817 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 01:04:19.073471 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 01:04:19.078634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 01:04:19.106714 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d Mar 4 01:04:19.106753 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 4 01:04:19.114371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 01:04:19.114396 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 01:04:19.118523 kernel: BTRFS info (device dm-0): using free space tree Mar 4 01:04:19.446484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 01:04:19.450721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 01:04:19.463660 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 01:04:19.472568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 4 01:04:19.500351 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 01:04:19.503781 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 01:04:19.503793 kernel: BTRFS info (device sda6): using free space tree Mar 4 01:04:19.535376 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 01:04:19.542265 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 4 01:04:19.551430 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 01:04:19.561279 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 01:04:19.574510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 01:04:19.592412 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:04:19.606603 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 01:04:19.632976 systemd-networkd[902]: lo: Link UP Mar 4 01:04:19.632984 systemd-networkd[902]: lo: Gained carrier Mar 4 01:04:19.634507 systemd-networkd[902]: Enumeration completed Mar 4 01:04:19.635624 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 01:04:19.641948 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:04:19.641951 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 01:04:19.642478 systemd[1]: Reached target network.target - Network. 
Mar 4 01:04:19.694375 kernel: mlx5_core a391:00:02.0 enP41873s1: Link up Mar 4 01:04:19.736374 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: Data path switched to VF: enP41873s1 Mar 4 01:04:19.737305 systemd-networkd[902]: enP41873s1: Link UP Mar 4 01:04:19.737539 systemd-networkd[902]: eth0: Link UP Mar 4 01:04:19.737930 systemd-networkd[902]: eth0: Gained carrier Mar 4 01:04:19.737939 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:04:19.746552 systemd-networkd[902]: enP41873s1: Gained carrier Mar 4 01:04:19.767390 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 4 01:04:20.667798 ignition[887]: Ignition 2.19.0 Mar 4 01:04:20.667808 ignition[887]: Stage: fetch-offline Mar 4 01:04:20.671928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 01:04:20.667852 ignition[887]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:20.667860 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:20.667956 ignition[887]: parsed url from cmdline: "" Mar 4 01:04:20.667959 ignition[887]: no config URL provided Mar 4 01:04:20.667963 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 01:04:20.667970 ignition[887]: no config at "/usr/lib/ignition/user.ign" Mar 4 01:04:20.694665 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 4 01:04:20.667974 ignition[887]: failed to fetch config: resource requires networking Mar 4 01:04:20.668470 ignition[887]: Ignition finished successfully Mar 4 01:04:20.719793 ignition[911]: Ignition 2.19.0 Mar 4 01:04:20.719805 ignition[911]: Stage: fetch Mar 4 01:04:20.720007 ignition[911]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:20.720019 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:20.720119 ignition[911]: parsed url from cmdline: "" Mar 4 01:04:20.720123 ignition[911]: no config URL provided Mar 4 01:04:20.720128 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 01:04:20.720134 ignition[911]: no config at "/usr/lib/ignition/user.ign" Mar 4 01:04:20.720159 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 4 01:04:20.877460 ignition[911]: GET result: OK Mar 4 01:04:20.877523 ignition[911]: config has been read from IMDS userdata Mar 4 01:04:20.877567 ignition[911]: parsing config with SHA512: 39f73968983483d89ed9a27a81f8b1140036638965fa9e1b746e46147a19581427fa534048c27444b96134e54a3d529970d9174e702802c2a25d4895f86ef369 Mar 4 01:04:20.881743 unknown[911]: fetched base config from "system" Mar 4 01:04:20.882126 ignition[911]: fetch: fetch complete Mar 4 01:04:20.881750 unknown[911]: fetched base config from "system" Mar 4 01:04:20.882131 ignition[911]: fetch: fetch passed Mar 4 01:04:20.881755 unknown[911]: fetched user config from "azure" Mar 4 01:04:20.882170 ignition[911]: Ignition finished successfully Mar 4 01:04:20.884982 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 4 01:04:20.906533 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 4 01:04:20.922252 ignition[917]: Ignition 2.19.0 Mar 4 01:04:20.922261 ignition[917]: Stage: kargs Mar 4 01:04:20.922438 ignition[917]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:20.928302 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 4 01:04:20.922448 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:20.923353 ignition[917]: kargs: kargs passed Mar 4 01:04:20.923408 ignition[917]: Ignition finished successfully Mar 4 01:04:20.952637 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 4 01:04:20.966569 ignition[923]: Ignition 2.19.0 Mar 4 01:04:20.966579 ignition[923]: Stage: disks Mar 4 01:04:20.970147 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 4 01:04:20.966756 ignition[923]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:20.976062 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 4 01:04:20.966766 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:20.980860 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 4 01:04:20.967736 ignition[923]: disks: disks passed Mar 4 01:04:20.987961 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 01:04:20.967786 ignition[923]: Ignition finished successfully Mar 4 01:04:20.995966 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 01:04:21.003339 systemd[1]: Reached target basic.target - Basic System. Mar 4 01:04:21.027673 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 4 01:04:21.063440 systemd-networkd[902]: eth0: Gained IPv6LL Mar 4 01:04:21.094907 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 4 01:04:21.102225 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 4 01:04:21.118597 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 4 01:04:21.177325 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none. Mar 4 01:04:21.173968 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 4 01:04:21.181212 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 4 01:04:21.222454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 01:04:21.244194 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Mar 4 01:04:21.240959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 4 01:04:21.253532 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 4 01:04:21.271689 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 01:04:21.271714 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 01:04:21.271724 kernel: BTRFS info (device sda6): using free space tree Mar 4 01:04:21.268412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 4 01:04:21.268466 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 01:04:21.297875 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 01:04:21.277750 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 4 01:04:21.303647 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 4 01:04:21.309078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 4 01:04:21.961303 coreos-metadata[945]: Mar 04 01:04:21.961 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 4 01:04:21.968795 coreos-metadata[945]: Mar 04 01:04:21.968 INFO Fetch successful Mar 4 01:04:21.972721 coreos-metadata[945]: Mar 04 01:04:21.972 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 4 01:04:21.990982 coreos-metadata[945]: Mar 04 01:04:21.990 INFO Fetch successful Mar 4 01:04:22.003804 coreos-metadata[945]: Mar 04 01:04:22.003 INFO wrote hostname ci-4081.3.6-n-8ef68d175b to /sysroot/etc/hostname Mar 4 01:04:22.011492 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 4 01:04:22.200143 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory Mar 4 01:04:22.234045 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Mar 4 01:04:22.241472 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Mar 4 01:04:22.249046 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Mar 4 01:04:23.502253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 4 01:04:23.512742 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 4 01:04:23.520526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 4 01:04:23.537349 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 01:04:23.532901 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 4 01:04:23.556342 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 4 01:04:23.564494 ignition[1062]: INFO : Ignition 2.19.0 Mar 4 01:04:23.564494 ignition[1062]: INFO : Stage: mount Mar 4 01:04:23.575007 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:23.575007 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:23.575007 ignition[1062]: INFO : mount: mount passed Mar 4 01:04:23.575007 ignition[1062]: INFO : Ignition finished successfully Mar 4 01:04:23.567335 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 4 01:04:23.591155 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 4 01:04:23.598628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 4 01:04:23.622487 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Mar 4 01:04:23.632869 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223 Mar 4 01:04:23.632905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 4 01:04:23.636375 kernel: BTRFS info (device sda6): using free space tree Mar 4 01:04:23.643373 kernel: BTRFS info (device sda6): auto enabling async discard Mar 4 01:04:23.645466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 4 01:04:23.667907 ignition[1089]: INFO : Ignition 2.19.0 Mar 4 01:04:23.672643 ignition[1089]: INFO : Stage: files Mar 4 01:04:23.672643 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:23.672643 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:23.672643 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Mar 4 01:04:23.704245 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 4 01:04:23.704245 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 4 01:04:23.862898 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 4 01:04:23.868646 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 4 01:04:23.868646 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 4 01:04:23.863305 unknown[1089]: wrote ssh authorized keys file for user: core Mar 4 01:04:23.914328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 4 01:04:24.028789 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 4 
01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 01:04:24.036711 
ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 4 01:04:24.525904 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 4 01:04:25.681657 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 4 01:04:25.681657 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 4 01:04:25.770458 ignition[1089]: INFO : files: op(10): [finished] 
setting preset to enabled for "prepare-helm.service" Mar 4 01:04:25.770458 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 4 01:04:25.770458 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 4 01:04:25.770458 ignition[1089]: INFO : files: files passed Mar 4 01:04:25.770458 ignition[1089]: INFO : Ignition finished successfully Mar 4 01:04:25.716283 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 4 01:04:25.742116 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 4 01:04:25.754515 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 4 01:04:25.824456 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 01:04:25.824456 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 4 01:04:25.764484 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 4 01:04:25.846106 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 4 01:04:25.764622 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 4 01:04:25.819216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 01:04:25.824998 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 4 01:04:25.853647 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 4 01:04:25.890694 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 4 01:04:25.890819 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Mar 4 01:04:25.900290 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 4 01:04:25.909220 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 4 01:04:25.917547 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 4 01:04:25.928568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 4 01:04:25.945249 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:04:25.957599 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 4 01:04:25.974952 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 4 01:04:25.975055 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 4 01:04:25.985272 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:04:25.993712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:04:26.002756 systemd[1]: Stopped target timers.target - Timer Units. Mar 4 01:04:26.011007 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 4 01:04:26.011078 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:04:26.023235 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 4 01:04:26.031910 systemd[1]: Stopped target basic.target - Basic System. Mar 4 01:04:26.039669 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 4 01:04:26.047422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 01:04:26.058192 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 4 01:04:26.068641 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 4 01:04:26.077623 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 01:04:26.086893 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 4 01:04:26.096111 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 4 01:04:26.104466 systemd[1]: Stopped target swap.target - Swaps. Mar 4 01:04:26.111719 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 4 01:04:26.111783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 4 01:04:26.123335 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:04:26.132741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:04:26.141704 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 4 01:04:26.146379 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:04:26.151478 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 4 01:04:26.151542 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 4 01:04:26.165232 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 4 01:04:26.165281 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 01:04:26.174963 systemd[1]: ignition-files.service: Deactivated successfully. Mar 4 01:04:26.175004 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 4 01:04:26.183860 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 4 01:04:26.183894 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Mar 4 01:04:26.227726 ignition[1142]: INFO : Ignition 2.19.0 Mar 4 01:04:26.227726 ignition[1142]: INFO : Stage: umount Mar 4 01:04:26.227726 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:26.227726 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:26.227726 ignition[1142]: INFO : umount: umount passed Mar 4 01:04:26.204533 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 4 01:04:26.272433 ignition[1142]: INFO : Ignition finished successfully Mar 4 01:04:26.218474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 4 01:04:26.231620 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 4 01:04:26.231688 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:04:26.236927 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 4 01:04:26.236971 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 01:04:26.254169 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 4 01:04:26.254268 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 4 01:04:26.268957 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 4 01:04:26.269064 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 4 01:04:26.285169 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 4 01:04:26.285227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 4 01:04:26.293933 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 4 01:04:26.293974 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 4 01:04:26.303078 systemd[1]: Stopped target network.target - Network. Mar 4 01:04:26.310826 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 4 01:04:26.310881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 4 01:04:26.321552 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 01:04:26.328728 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 01:04:26.332390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:04:26.341186 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 01:04:26.350142 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 01:04:26.357804 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 01:04:26.357859 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:04:26.365692 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 01:04:26.365739 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:04:26.373459 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 01:04:26.373507 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 01:04:26.381965 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 01:04:26.382009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 01:04:26.389630 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 01:04:26.398399 systemd-networkd[902]: eth0: DHCPv6 lease lost
Mar 4 01:04:26.399614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 01:04:26.407089 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 01:04:26.407701 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 01:04:26.407797 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 01:04:26.416135 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 01:04:26.416276 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 01:04:26.426060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 01:04:26.426109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:04:26.450584 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 01:04:26.585428 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: Data path switched from VF: enP41873s1
Mar 4 01:04:26.457256 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 01:04:26.457327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:04:26.466455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 01:04:26.466499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:04:26.474324 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 01:04:26.474367 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:04:26.483683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 01:04:26.483726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:04:26.492744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:04:26.520001 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 01:04:26.520169 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:04:26.529458 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 01:04:26.529501 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:04:26.537807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 01:04:26.537849 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:04:26.546470 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 01:04:26.546522 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:04:26.558623 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 01:04:26.558669 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:04:26.579789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:04:26.579852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:04:26.612542 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 01:04:26.622435 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 01:04:26.622499 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:04:26.633022 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 01:04:26.633065 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:04:26.642874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 01:04:26.642911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:04:26.653221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:04:26.653254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:04:26.661846 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 01:04:26.661948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 01:04:26.670680 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 01:04:26.670771 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 01:04:26.678443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 01:04:26.678522 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 01:04:26.687817 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 01:04:26.695449 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 01:04:26.695530 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 01:04:26.719594 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 01:04:26.822514 systemd[1]: Switching root.
Mar 4 01:04:26.855204 systemd-journald[217]: Journal stopped
Mar 4 01:04:16.163247 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 4 01:04:16.163268 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Mar 3 22:54:15 -00 2026
Mar 4 01:04:16.163276 kernel: KASLR enabled
Mar 4 01:04:16.163282 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 4 01:04:16.163289 kernel: printk: bootconsole [pl11] enabled
Mar 4 01:04:16.163295 kernel: efi: EFI v2.7 by EDK II
Mar 4 01:04:16.163302 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 4 01:04:16.163309 kernel: random: crng init done
Mar 4 01:04:16.163315 kernel: ACPI: Early table checksum verification disabled
Mar 4 01:04:16.163320 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 4 01:04:16.163327 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163333 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163340 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 4 01:04:16.163346 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163354 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163360 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163367 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163375 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163381 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163387 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 4 01:04:16.163394 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 4 01:04:16.163400 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 4 01:04:16.163407 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 4 01:04:16.163413 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 4 01:04:16.163419 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 4 01:04:16.163426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 4 01:04:16.163432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 4 01:04:16.163439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 4 01:04:16.163447 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 4 01:04:16.163453 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 4 01:04:16.163460 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 4 01:04:16.163466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 4 01:04:16.163473 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 4 01:04:16.163479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 4 01:04:16.163485 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 4 01:04:16.163492 kernel: Zone ranges:
Mar 4 01:04:16.163498 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 4 01:04:16.163505 kernel: DMA32 empty
Mar 4 01:04:16.163511 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 01:04:16.163518 kernel: Movable zone start for each node
Mar 4 01:04:16.163528 kernel: Early memory node ranges
Mar 4 01:04:16.163535 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 4 01:04:16.163542 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 4 01:04:16.163549 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 4 01:04:16.163555 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 4 01:04:16.163563 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 4 01:04:16.163570 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 4 01:04:16.163577 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 4 01:04:16.163584 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 4 01:04:16.163591 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 4 01:04:16.163597 kernel: psci: probing for conduit method from ACPI.
Mar 4 01:04:16.163604 kernel: psci: PSCIv1.1 detected in firmware.
Mar 4 01:04:16.163611 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 4 01:04:16.163618 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 4 01:04:16.163624 kernel: psci: SMC Calling Convention v1.4
Mar 4 01:04:16.163631 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 4 01:04:16.163638 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 4 01:04:16.163646 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 4 01:04:16.163653 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 4 01:04:16.163660 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 4 01:04:16.163667 kernel: Detected PIPT I-cache on CPU0
Mar 4 01:04:16.163674 kernel: CPU features: detected: GIC system register CPU interface
Mar 4 01:04:16.163680 kernel: CPU features: detected: Hardware dirty bit management
Mar 4 01:04:16.163687 kernel: CPU features: detected: Spectre-BHB
Mar 4 01:04:16.165722 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 4 01:04:16.165734 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 4 01:04:16.165741 kernel: CPU features: detected: ARM erratum 1418040
Mar 4 01:04:16.165748 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 4 01:04:16.165759 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 4 01:04:16.165766 kernel: alternatives: applying boot alternatives
Mar 4 01:04:16.165775 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 01:04:16.165782 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 01:04:16.165789 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 01:04:16.165796 kernel: Fallback order for Node 0: 0
Mar 4 01:04:16.165803 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 4 01:04:16.165810 kernel: Policy zone: Normal
Mar 4 01:04:16.165817 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 01:04:16.165824 kernel: software IO TLB: area num 2.
Mar 4 01:04:16.165830 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 4 01:04:16.165839 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 4 01:04:16.165846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 4 01:04:16.165853 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 01:04:16.165861 kernel: rcu: RCU event tracing is enabled.
Mar 4 01:04:16.165868 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 4 01:04:16.165875 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 01:04:16.165882 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 01:04:16.165889 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 01:04:16.165895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 4 01:04:16.165902 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 4 01:04:16.165909 kernel: GICv3: 960 SPIs implemented
Mar 4 01:04:16.165917 kernel: GICv3: 0 Extended SPIs implemented
Mar 4 01:04:16.165924 kernel: Root IRQ handler: gic_handle_irq
Mar 4 01:04:16.165931 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 4 01:04:16.165938 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 4 01:04:16.165945 kernel: ITS: No ITS available, not enabling LPIs
Mar 4 01:04:16.165952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 01:04:16.165959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 01:04:16.165965 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 4 01:04:16.165973 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 4 01:04:16.165979 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 4 01:04:16.165986 kernel: Console: colour dummy device 80x25
Mar 4 01:04:16.165995 kernel: printk: console [tty1] enabled
Mar 4 01:04:16.166002 kernel: ACPI: Core revision 20230628
Mar 4 01:04:16.166009 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 4 01:04:16.166017 kernel: pid_max: default: 32768 minimum: 301
Mar 4 01:04:16.166024 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 01:04:16.166031 kernel: landlock: Up and running.
Mar 4 01:04:16.166038 kernel: SELinux: Initializing.
Mar 4 01:04:16.166045 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166060 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 01:04:16.166068 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 4 01:04:16.166075 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 4 01:04:16.166082 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 4 01:04:16.166089 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 4 01:04:16.166096 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 01:04:16.166103 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 01:04:16.166110 kernel: Remapping and enabling EFI services.
Mar 4 01:04:16.166123 kernel: smp: Bringing up secondary CPUs ...
Mar 4 01:04:16.166131 kernel: Detected PIPT I-cache on CPU1
Mar 4 01:04:16.166138 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 4 01:04:16.166145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 4 01:04:16.166154 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 4 01:04:16.166161 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 01:04:16.166169 kernel: SMP: Total of 2 processors activated.
Mar 4 01:04:16.166176 kernel: CPU features: detected: 32-bit EL0 Support
Mar 4 01:04:16.166184 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 4 01:04:16.166192 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 4 01:04:16.166200 kernel: CPU features: detected: CRC32 instructions
Mar 4 01:04:16.166207 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 4 01:04:16.166215 kernel: CPU features: detected: LSE atomic instructions
Mar 4 01:04:16.166222 kernel: CPU features: detected: Privileged Access Never
Mar 4 01:04:16.166230 kernel: CPU: All CPU(s) started at EL1
Mar 4 01:04:16.166237 kernel: alternatives: applying system-wide alternatives
Mar 4 01:04:16.166244 kernel: devtmpfs: initialized
Mar 4 01:04:16.166252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 01:04:16.166261 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 4 01:04:16.166269 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 01:04:16.166276 kernel: SMBIOS 3.1.0 present.
Mar 4 01:04:16.166283 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 4 01:04:16.166291 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 01:04:16.166299 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 4 01:04:16.166306 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 4 01:04:16.166314 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 4 01:04:16.166321 kernel: audit: initializing netlink subsys (disabled)
Mar 4 01:04:16.166330 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 4 01:04:16.166337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 01:04:16.166344 kernel: cpuidle: using governor menu
Mar 4 01:04:16.166352 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 4 01:04:16.166359 kernel: ASID allocator initialised with 32768 entries
Mar 4 01:04:16.166367 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 01:04:16.166374 kernel: Serial: AMBA PL011 UART driver
Mar 4 01:04:16.166381 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 4 01:04:16.166389 kernel: Modules: 0 pages in range for non-PLT usage
Mar 4 01:04:16.166398 kernel: Modules: 509008 pages in range for PLT usage
Mar 4 01:04:16.166405 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166413 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 01:04:16.166420 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 4 01:04:16.166435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 01:04:16.166450 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 4 01:04:16.166457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 4 01:04:16.166466 kernel: ACPI: Added _OSI(Module Device)
Mar 4 01:04:16.166473 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 01:04:16.166481 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 01:04:16.166488 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 01:04:16.166495 kernel: ACPI: Interpreter enabled
Mar 4 01:04:16.166502 kernel: ACPI: Using GIC for interrupt routing
Mar 4 01:04:16.166510 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 4 01:04:16.166517 kernel: printk: console [ttyAMA0] enabled
Mar 4 01:04:16.166524 kernel: printk: bootconsole [pl11] disabled
Mar 4 01:04:16.166533 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 4 01:04:16.166541 kernel: iommu: Default domain type: Translated
Mar 4 01:04:16.166548 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 4 01:04:16.166556 kernel: efivars: Registered efivars operations
Mar 4 01:04:16.166563 kernel: vgaarb: loaded
Mar 4 01:04:16.166570 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 4 01:04:16.166577 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 01:04:16.166585 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 01:04:16.166592 kernel: pnp: PnP ACPI init
Mar 4 01:04:16.166600 kernel: pnp: PnP ACPI: found 0 devices
Mar 4 01:04:16.166608 kernel: NET: Registered PF_INET protocol family
Mar 4 01:04:16.166615 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 01:04:16.166623 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 01:04:16.166630 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 01:04:16.166638 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 01:04:16.166645 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 01:04:16.166652 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 01:04:16.166660 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166669 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:04:16.166676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 01:04:16.166684 kernel: PCI: CLS 0 bytes, default 64
Mar 4 01:04:16.166699 kernel: kvm [1]: HYP mode not available
Mar 4 01:04:16.166706 kernel: Initialise system trusted keyrings
Mar 4 01:04:16.166714 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 01:04:16.166721 kernel: Key type asymmetric registered
Mar 4 01:04:16.166728 kernel: Asymmetric key parser 'x509' registered
Mar 4 01:04:16.166735 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 4 01:04:16.166745 kernel: io scheduler mq-deadline registered
Mar 4 01:04:16.166752 kernel: io scheduler kyber registered
Mar 4 01:04:16.166759 kernel: io scheduler bfq registered
Mar 4 01:04:16.166767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 01:04:16.166774 kernel: thunder_xcv, ver 1.0
Mar 4 01:04:16.166781 kernel: thunder_bgx, ver 1.0
Mar 4 01:04:16.166788 kernel: nicpf, ver 1.0
Mar 4 01:04:16.166796 kernel: nicvf, ver 1.0
Mar 4 01:04:16.166934 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 4 01:04:16.167011 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-04T01:04:15 UTC (1772586255)
Mar 4 01:04:16.167022 kernel: efifb: probing for efifb
Mar 4 01:04:16.167030 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 4 01:04:16.167037 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 4 01:04:16.167045 kernel: efifb: scrolling: redraw
Mar 4 01:04:16.167052 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 4 01:04:16.167059 kernel: Console: switching to colour frame buffer device 128x48
Mar 4 01:04:16.167067 kernel: fb0: EFI VGA frame buffer device
Mar 4 01:04:16.167076 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 4 01:04:16.167084 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 4 01:04:16.167091 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 4 01:04:16.167102 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 4 01:04:16.167109 kernel: watchdog: Hard watchdog permanently disabled
Mar 4 01:04:16.167116 kernel: NET: Registered PF_INET6 protocol family
Mar 4 01:04:16.167124 kernel: Segment Routing with IPv6
Mar 4 01:04:16.167131 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 01:04:16.167138 kernel: NET: Registered PF_PACKET protocol family
Mar 4 01:04:16.167147 kernel: Key type dns_resolver registered
Mar 4 01:04:16.167154 kernel: registered taskstats version 1
Mar 4 01:04:16.167161 kernel: Loading compiled-in X.509 certificates
Mar 4 01:04:16.167169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: f9e9add37a55ffc89aa4c4c76a356167cf3fd659'
Mar 4 01:04:16.167176 kernel: Key type .fscrypt registered
Mar 4 01:04:16.167183 kernel: Key type fscrypt-provisioning registered
Mar 4 01:04:16.167190 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 01:04:16.167198 kernel: ima: Allocated hash algorithm: sha1
Mar 4 01:04:16.167205 kernel: ima: No architecture policies found
Mar 4 01:04:16.167215 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 4 01:04:16.167223 kernel: clk: Disabling unused clocks
Mar 4 01:04:16.167230 kernel: Freeing unused kernel memory: 39424K
Mar 4 01:04:16.167238 kernel: Run /init as init process
Mar 4 01:04:16.167245 kernel: with arguments:
Mar 4 01:04:16.167252 kernel: /init
Mar 4 01:04:16.167260 kernel: with environment:
Mar 4 01:04:16.167266 kernel: HOME=/
Mar 4 01:04:16.167274 kernel: TERM=linux
Mar 4 01:04:16.167283 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:04:16.167294 systemd[1]: Detected virtualization microsoft.
Mar 4 01:04:16.167302 systemd[1]: Detected architecture arm64.
Mar 4 01:04:16.167310 systemd[1]: Running in initrd.
Mar 4 01:04:16.167317 systemd[1]: No hostname configured, using default hostname.
Mar 4 01:04:16.167325 systemd[1]: Hostname set to .
Mar 4 01:04:16.167333 systemd[1]: Initializing machine ID from random generator.
Mar 4 01:04:16.167343 systemd[1]: Queued start job for default target initrd.target.
Mar 4 01:04:16.167351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:04:16.167359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:04:16.167368 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 01:04:16.167376 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:04:16.167384 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 01:04:16.167392 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 01:04:16.167401 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 01:04:16.167411 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 01:04:16.167419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:04:16.167427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:04:16.167436 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:04:16.167444 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:04:16.167452 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:04:16.167459 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:04:16.167467 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:04:16.167477 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:04:16.167485 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 01:04:16.167493 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 01:04:16.167501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:04:16.167509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:04:16.167517 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:04:16.167525 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:04:16.167533 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 01:04:16.167543 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:04:16.167551 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 01:04:16.167559 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 01:04:16.167567 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:04:16.167574 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:04:16.167600 systemd-journald[217]: Collecting audit messages is disabled.
Mar 4 01:04:16.167621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:04:16.167630 systemd-journald[217]: Journal started
Mar 4 01:04:16.167649 systemd-journald[217]: Runtime Journal (/run/log/journal/32071dc171a94957ad34dc8c9b2dec29) is 8.0M, max 78.5M, 70.5M free.
Mar 4 01:04:16.172981 systemd-modules-load[218]: Inserted module 'overlay'
Mar 4 01:04:16.184751 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:04:16.185309 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 01:04:16.197044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:04:16.215230 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 01:04:16.222105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 01:04:16.222135 kernel: Bridge firewalling registered
Mar 4 01:04:16.224824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:04:16.225567 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 4 01:04:16.235162 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:04:16.257854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:04:16.265867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:04:16.283854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:04:16.299848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:04:16.310765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:04:16.317220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:04:16.326972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:04:16.335907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:04:16.355938 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 01:04:16.367280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:04:16.382422 dracut-cmdline[249]: dracut-dracut-053
Mar 4 01:04:16.393260 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=91dd0271a88d9bb7bec20dc87bcc265a7fea20c3a6509775d928994c51ae2010
Mar 4 01:04:16.384262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:04:16.398479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:04:16.444854 systemd-resolved[255]: Positive Trust Anchors:
Mar 4 01:04:16.444868 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:04:16.444900 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:04:16.447110 systemd-resolved[255]: Defaulting to hostname 'linux'.
Mar 4 01:04:16.448921 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:04:16.454312 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:04:16.538715 kernel: SCSI subsystem initialized
Mar 4 01:04:16.545709 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 01:04:16.554712 kernel: iscsi: registered transport (tcp)
Mar 4 01:04:16.571199 kernel: iscsi: registered transport (qla4xxx)
Mar 4 01:04:16.571229 kernel: QLogic iSCSI HBA Driver
Mar 4 01:04:16.610042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:04:16.623813 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 01:04:16.651662 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 01:04:16.651716 kernel: device-mapper: uevent: version 1.0.3
Mar 4 01:04:16.656592 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 01:04:16.720700 kernel: raid6: neonx8 gen() 15807 MB/s
Mar 4 01:04:16.726711 kernel: raid6: neonx4 gen() 15692 MB/s
Mar 4 01:04:16.742704 kernel: raid6: neonx2 gen() 13312 MB/s
Mar 4 01:04:16.762702 kernel: raid6: neonx1 gen() 10492 MB/s
Mar 4 01:04:16.781701 kernel: raid6: int64x8 gen() 6981 MB/s
Mar 4 01:04:16.800700 kernel: raid6: int64x4 gen() 7363 MB/s
Mar 4 01:04:16.820701 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 4 01:04:16.842452 kernel: raid6: int64x1 gen() 5072 MB/s
Mar 4 01:04:16.842471 kernel: raid6: using algorithm neonx8 gen() 15807 MB/s
Mar 4 01:04:16.864543 kernel: raid6: .... xor() 12037 MB/s, rmw enabled
Mar 4 01:04:16.864553 kernel: raid6: using neon recovery algorithm
Mar 4 01:04:16.874435 kernel: xor: measuring software checksum speed
Mar 4 01:04:16.874448 kernel: 8regs : 19745 MB/sec
Mar 4 01:04:16.880985 kernel: 32regs : 19119 MB/sec
Mar 4 01:04:16.881009 kernel: arm64_neon : 26927 MB/sec
Mar 4 01:04:16.883999 kernel: xor: using function: arm64_neon (26927 MB/sec)
Mar 4 01:04:16.933870 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 01:04:16.942726 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:04:16.956823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:04:16.972498 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 4 01:04:16.976562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:04:16.990857 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 01:04:17.006104 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Mar 4 01:04:17.031387 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:04:17.046866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:04:17.082867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:04:17.097877 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 01:04:17.125416 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:04:17.139024 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:04:17.149632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:04:17.161579 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:04:17.173889 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 01:04:17.192041 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:04:17.203994 kernel: hv_vmbus: Vmbus version:5.3
Mar 4 01:04:17.205864 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:04:17.206019 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:04:17.229458 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:04:17.263827 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 4 01:04:17.263850 kernel: hv_vmbus: registering driver hid_hyperv
Mar 4 01:04:17.263860 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 4 01:04:17.263877 kernel: hv_vmbus: registering driver hv_netvsc
Mar 4 01:04:17.263886 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 4 01:04:17.251899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:04:17.276619 kernel: PTP clock support registered
Mar 4 01:04:17.252112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:04:17.315158 kernel: hv_vmbus: registering driver hv_storvsc
Mar 4 01:04:17.315180 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 4 01:04:17.315191 kernel: scsi host0: storvsc_host_t
Mar 4 01:04:17.315357 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 4 01:04:17.315370 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 4 01:04:17.315391 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 4 01:04:17.272339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:04:17.320470 kernel: scsi host1: storvsc_host_t
Mar 4 01:04:17.327629 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 4 01:04:17.328231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:04:17.353804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:04:17.370852 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:04:17.386670 kernel: hv_utils: Registering HyperV Utility Driver
Mar 4 01:04:17.386829 kernel: hv_vmbus: registering driver hv_utils
Mar 4 01:04:17.401502 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 4 01:04:17.401667 kernel: hv_utils: Heartbeat IC version 3.0
Mar 4 01:04:17.401679 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 4 01:04:17.401783 kernel: hv_utils: Shutdown IC version 3.2
Mar 4 01:04:17.401794 kernel: hv_utils: TimeSync IC version 4.0
Mar 4 01:04:17.410920 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 4 01:04:17.411071 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: VF slot 1 added
Mar 4 01:04:16.980267 systemd-resolved[255]: Clock change detected. Flushing caches.
Mar 4 01:04:16.999140 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 4 01:04:16.999283 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 4 01:04:16.999982 systemd-journald[217]: Time jumped backwards, rotating.
Mar 4 01:04:17.007375 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 01:04:17.016107 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 4 01:04:17.010924 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:04:17.039171 kernel: hv_vmbus: registering driver hv_pci
Mar 4 01:04:17.039192 kernel: hv_pci 5a074528-a391-49e1-bd32-2e4a93b800a6: PCI VMBus probing: Using version 0x10004
Mar 4 01:04:17.039388 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 4 01:04:17.039520 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 01:04:17.048560 kernel: hv_pci 5a074528-a391-49e1-bd32-2e4a93b800a6: PCI host bridge to bus a391:00
Mar 4 01:04:17.048681 kernel: pci_bus a391:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 4 01:04:17.053194 kernel: pci_bus a391:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 4 01:04:17.054678 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 4 01:04:17.060962 kernel: pci a391:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 4 01:04:17.071086 kernel: pci a391:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 01:04:17.071140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#15 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 01:04:17.071287 kernel: pci a391:00:02.0: enabling Extended Tags
Mar 4 01:04:17.103633 kernel: pci a391:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a391:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 4 01:04:17.103839 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 4 01:04:17.103935 kernel: pci_bus a391:00: busn_res: [bus 00-ff] end is updated to 00
Mar 4 01:04:17.113381 kernel: pci a391:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 4 01:04:17.153809 kernel: mlx5_core a391:00:02.0: enabling device (0000 -> 0002)
Mar 4 01:04:17.160376 kernel: mlx5_core a391:00:02.0: firmware version: 16.30.5026
Mar 4 01:04:17.357939 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: VF registering: eth1
Mar 4 01:04:17.358142 kernel: mlx5_core a391:00:02.0 eth1: joined to eth0
Mar 4 01:04:17.364431 kernel: mlx5_core a391:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 4 01:04:17.373382 kernel: mlx5_core a391:00:02.0 enP41873s1: renamed from eth1
Mar 4 01:04:17.531929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 4 01:04:17.569383 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (486)
Mar 4 01:04:17.583862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 4 01:04:17.601411 kernel: BTRFS: device fsid aea7b15d-9414-4172-952e-52d0c2e5c89d devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (481)
Mar 4 01:04:17.611214 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 4 01:04:17.616387 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 4 01:04:17.637623 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 01:04:17.649155 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 4 01:04:17.665372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 01:04:17.674372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 01:04:17.683371 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 01:04:18.683332 disk-uuid[605]: The operation has completed successfully.
Mar 4 01:04:18.689354 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 4 01:04:18.755185 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 01:04:18.756543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 01:04:18.785539 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 01:04:18.795843 sh[718]: Success
Mar 4 01:04:18.826397 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 4 01:04:19.064817 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 01:04:19.073471 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 01:04:19.078634 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 01:04:19.106714 kernel: BTRFS info (device dm-0): first mount of filesystem aea7b15d-9414-4172-952e-52d0c2e5c89d
Mar 4 01:04:19.106753 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 4 01:04:19.114371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 01:04:19.114396 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 01:04:19.118523 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 01:04:19.446484 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 01:04:19.450721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 01:04:19.463660 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 01:04:19.472568 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 01:04:19.500351 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 01:04:19.503781 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 01:04:19.503793 kernel: BTRFS info (device sda6): using free space tree
Mar 4 01:04:19.535376 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 01:04:19.542265 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 01:04:19.551430 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 01:04:19.561279 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 01:04:19.574510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 01:04:19.592412 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:04:19.606603 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:04:19.632976 systemd-networkd[902]: lo: Link UP
Mar 4 01:04:19.632984 systemd-networkd[902]: lo: Gained carrier
Mar 4 01:04:19.634507 systemd-networkd[902]: Enumeration completed
Mar 4 01:04:19.635624 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:04:19.641948 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:04:19.641951 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:04:19.642478 systemd[1]: Reached target network.target - Network.
Mar 4 01:04:19.694375 kernel: mlx5_core a391:00:02.0 enP41873s1: Link up
Mar 4 01:04:19.736374 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: Data path switched to VF: enP41873s1
Mar 4 01:04:19.737305 systemd-networkd[902]: enP41873s1: Link UP
Mar 4 01:04:19.737539 systemd-networkd[902]: eth0: Link UP
Mar 4 01:04:19.737930 systemd-networkd[902]: eth0: Gained carrier
Mar 4 01:04:19.737939 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:04:19.746552 systemd-networkd[902]: enP41873s1: Gained carrier
Mar 4 01:04:19.767390 systemd-networkd[902]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 01:04:20.667798 ignition[887]: Ignition 2.19.0
Mar 4 01:04:20.667808 ignition[887]: Stage: fetch-offline
Mar 4 01:04:20.671928 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:04:20.667852 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:04:20.667860 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 01:04:20.667956 ignition[887]: parsed url from cmdline: ""
Mar 4 01:04:20.667959 ignition[887]: no config URL provided
Mar 4 01:04:20.667963 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 01:04:20.667970 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Mar 4 01:04:20.694665 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 4 01:04:20.667974 ignition[887]: failed to fetch config: resource requires networking
Mar 4 01:04:20.668470 ignition[887]: Ignition finished successfully
Mar 4 01:04:20.719793 ignition[911]: Ignition 2.19.0
Mar 4 01:04:20.719805 ignition[911]: Stage: fetch
Mar 4 01:04:20.720007 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:04:20.720019 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 01:04:20.720119 ignition[911]: parsed url from cmdline: ""
Mar 4 01:04:20.720123 ignition[911]: no config URL provided
Mar 4 01:04:20.720128 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 01:04:20.720134 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Mar 4 01:04:20.720159 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 4 01:04:20.877460 ignition[911]: GET result: OK
Mar 4 01:04:20.877523 ignition[911]: config has been read from IMDS userdata
Mar 4 01:04:20.877567 ignition[911]: parsing config with SHA512: 39f73968983483d89ed9a27a81f8b1140036638965fa9e1b746e46147a19581427fa534048c27444b96134e54a3d529970d9174e702802c2a25d4895f86ef369
Mar 4 01:04:20.881743 unknown[911]: fetched base config from "system"
Mar 4 01:04:20.882126 ignition[911]: fetch: fetch complete
Mar 4 01:04:20.881750 unknown[911]: fetched base config from "system"
Mar 4 01:04:20.882131 ignition[911]: fetch: fetch passed
Mar 4 01:04:20.881755 unknown[911]: fetched user config from "azure"
Mar 4 01:04:20.882170 ignition[911]: Ignition finished successfully
Mar 4 01:04:20.884982 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 01:04:20.906533 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 01:04:20.922252 ignition[917]: Ignition 2.19.0
Mar 4 01:04:20.922261 ignition[917]: Stage: kargs
Mar 4 01:04:20.922438 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:04:20.928302 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 01:04:20.922448 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 01:04:20.923353 ignition[917]: kargs: kargs passed
Mar 4 01:04:20.923408 ignition[917]: Ignition finished successfully
Mar 4 01:04:20.952637 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 01:04:20.966569 ignition[923]: Ignition 2.19.0
Mar 4 01:04:20.966579 ignition[923]: Stage: disks
Mar 4 01:04:20.970147 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 01:04:20.966756 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:04:20.976062 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 01:04:20.966766 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 01:04:20.980860 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:04:20.967736 ignition[923]: disks: disks passed
Mar 4 01:04:20.987961 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:04:20.967786 ignition[923]: Ignition finished successfully
Mar 4 01:04:20.995966 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:04:21.003339 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:04:21.027673 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 01:04:21.063440 systemd-networkd[902]: eth0: Gained IPv6LL
Mar 4 01:04:21.094907 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 4 01:04:21.102225 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 01:04:21.118597 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 01:04:21.177325 kernel: EXT4-fs (sda9): mounted filesystem e47fe8fd-dacc-429e-aef1-b03916169c3c r/w with ordered data mode. Quota mode: none.
Mar 4 01:04:21.173968 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 01:04:21.181212 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:04:21.222454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:04:21.244194 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Mar 4 01:04:21.240959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 01:04:21.253532 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 4 01:04:21.271689 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 01:04:21.271714 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 01:04:21.271724 kernel: BTRFS info (device sda6): using free space tree
Mar 4 01:04:21.268412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 01:04:21.268466 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:04:21.297875 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 01:04:21.277750 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 01:04:21.303647 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 01:04:21.309078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:04:21.961303 coreos-metadata[945]: Mar 04 01:04:21.961 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 01:04:21.968795 coreos-metadata[945]: Mar 04 01:04:21.968 INFO Fetch successful
Mar 4 01:04:21.972721 coreos-metadata[945]: Mar 04 01:04:21.972 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 4 01:04:21.990982 coreos-metadata[945]: Mar 04 01:04:21.990 INFO Fetch successful
Mar 4 01:04:22.003804 coreos-metadata[945]: Mar 04 01:04:22.003 INFO wrote hostname ci-4081.3.6-n-8ef68d175b to /sysroot/etc/hostname
Mar 4 01:04:22.011492 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 4 01:04:22.200143 initrd-setup-root[972]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 01:04:22.234045 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Mar 4 01:04:22.241472 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 01:04:22.249046 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 01:04:23.502253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 01:04:23.512742 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 01:04:23.520526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 01:04:23.537349 kernel: BTRFS info (device sda6): last unmount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 01:04:23.532901 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 01:04:23.556342 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 01:04:23.564494 ignition[1062]: INFO : Ignition 2.19.0
Mar 4 01:04:23.564494 ignition[1062]: INFO : Stage: mount
Mar 4 01:04:23.575007 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:04:23.575007 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 01:04:23.575007 ignition[1062]: INFO : mount: mount passed
Mar 4 01:04:23.575007 ignition[1062]: INFO : Ignition finished successfully
Mar 4 01:04:23.567335 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 01:04:23.591155 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 01:04:23.598628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:04:23.622487 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Mar 4 01:04:23.632869 kernel: BTRFS info (device sda6): first mount of filesystem 890b17d4-8d00-4efa-984f-4dac5f17b223
Mar 4 01:04:23.632905 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 4 01:04:23.636375 kernel: BTRFS info (device sda6): using free space tree
Mar 4 01:04:23.643373 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 4 01:04:23.645466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:04:23.667907 ignition[1089]: INFO : Ignition 2.19.0
Mar 4 01:04:23.672643 ignition[1089]: INFO : Stage: files
Mar 4 01:04:23.672643 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:04:23.672643 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 4 01:04:23.672643 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 01:04:23.704245 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 01:04:23.704245 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 01:04:23.862898 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 01:04:23.868646 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 01:04:23.868646 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 01:04:23.868646 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 4 01:04:23.863305 unknown[1089]: wrote ssh authorized keys file for user: core
Mar 4 01:04:23.914328 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 01:04:24.028789 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 01:04:24.036711 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 4 01:04:24.525904 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 01:04:25.681657 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 4 01:04:25.681657 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 4 01:04:25.696316 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 01:04:25.770458 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 01:04:25.770458 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:04:25.770458 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:04:25.770458 ignition[1089]: INFO : files: files passed
Mar 4 01:04:25.770458 ignition[1089]: INFO : Ignition finished successfully
Mar 4 01:04:25.716283 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 01:04:25.742116 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 01:04:25.754515 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 01:04:25.824456 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:04:25.824456 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:04:25.764484 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 01:04:25.846106 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:04:25.764622 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 01:04:25.819216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:04:25.824998 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 01:04:25.853647 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 01:04:25.890694 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 01:04:25.890819 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 01:04:25.900290 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 4 01:04:25.909220 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 4 01:04:25.917547 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 4 01:04:25.928568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 4 01:04:25.945249 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:04:25.957599 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 4 01:04:25.974952 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 4 01:04:25.975055 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 4 01:04:25.985272 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:04:25.993712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:04:26.002756 systemd[1]: Stopped target timers.target - Timer Units. Mar 4 01:04:26.011007 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 4 01:04:26.011078 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 4 01:04:26.023235 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 4 01:04:26.031910 systemd[1]: Stopped target basic.target - Basic System. Mar 4 01:04:26.039669 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 4 01:04:26.047422 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 4 01:04:26.058192 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 4 01:04:26.068641 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 4 01:04:26.077623 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 01:04:26.086893 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 4 01:04:26.096111 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 4 01:04:26.104466 systemd[1]: Stopped target swap.target - Swaps. Mar 4 01:04:26.111719 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 4 01:04:26.111783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 4 01:04:26.123335 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:04:26.132741 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:04:26.141704 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 4 01:04:26.146379 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:04:26.151478 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 4 01:04:26.151542 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 4 01:04:26.165232 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 4 01:04:26.165281 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 4 01:04:26.174963 systemd[1]: ignition-files.service: Deactivated successfully. Mar 4 01:04:26.175004 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 4 01:04:26.183860 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 4 01:04:26.183894 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Mar 4 01:04:26.227726 ignition[1142]: INFO : Ignition 2.19.0 Mar 4 01:04:26.227726 ignition[1142]: INFO : Stage: umount Mar 4 01:04:26.227726 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 4 01:04:26.227726 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 4 01:04:26.227726 ignition[1142]: INFO : umount: umount passed Mar 4 01:04:26.204533 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 4 01:04:26.272433 ignition[1142]: INFO : Ignition finished successfully Mar 4 01:04:26.218474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 4 01:04:26.231620 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 4 01:04:26.231688 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:04:26.236927 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 4 01:04:26.236971 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 01:04:26.254169 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 4 01:04:26.254268 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 4 01:04:26.268957 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 4 01:04:26.269064 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 4 01:04:26.285169 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 4 01:04:26.285227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 4 01:04:26.293933 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 4 01:04:26.293974 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 4 01:04:26.303078 systemd[1]: Stopped target network.target - Network. Mar 4 01:04:26.310826 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 4 01:04:26.310881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 4 01:04:26.321552 systemd[1]: Stopped target paths.target - Path Units. Mar 4 01:04:26.328728 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 4 01:04:26.332390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:04:26.341186 systemd[1]: Stopped target slices.target - Slice Units. Mar 4 01:04:26.350142 systemd[1]: Stopped target sockets.target - Socket Units. Mar 4 01:04:26.357804 systemd[1]: iscsid.socket: Deactivated successfully. Mar 4 01:04:26.357859 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 01:04:26.365692 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 4 01:04:26.365739 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 01:04:26.373459 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 4 01:04:26.373507 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 4 01:04:26.381965 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 4 01:04:26.382009 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 4 01:04:26.389630 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 4 01:04:26.398399 systemd-networkd[902]: eth0: DHCPv6 lease lost Mar 4 01:04:26.399614 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 4 01:04:26.407089 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 4 01:04:26.407701 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 4 01:04:26.407797 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 4 01:04:26.416135 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 4 01:04:26.416276 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 4 01:04:26.426060 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Mar 4 01:04:26.426109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:04:26.450584 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 4 01:04:26.585428 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: Data path switched from VF: enP41873s1 Mar 4 01:04:26.457256 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 4 01:04:26.457327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:04:26.466455 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 4 01:04:26.466499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:04:26.474324 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 4 01:04:26.474367 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 4 01:04:26.483683 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 4 01:04:26.483726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:04:26.492744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:04:26.520001 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 4 01:04:26.520169 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:04:26.529458 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 4 01:04:26.529501 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 4 01:04:26.537807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 4 01:04:26.537849 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:04:26.546470 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 4 01:04:26.546522 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Mar 4 01:04:26.558623 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 4 01:04:26.558669 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 4 01:04:26.579789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 01:04:26.579852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:04:26.612542 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 4 01:04:26.622435 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 4 01:04:26.622499 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:04:26.633022 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 4 01:04:26.633065 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:04:26.642874 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 4 01:04:26.642911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:04:26.653221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:04:26.653254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:04:26.661846 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 4 01:04:26.661948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 4 01:04:26.670680 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 4 01:04:26.670771 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 4 01:04:26.678443 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 4 01:04:26.678522 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 4 01:04:26.687817 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Mar 4 01:04:26.695449 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 4 01:04:26.695530 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 4 01:04:26.719594 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 4 01:04:26.822514 systemd[1]: Switching root. Mar 4 01:04:26.855204 systemd-journald[217]: Journal stopped Mar 4 01:04:32.503720 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 4 01:04:32.503763 kernel: SELinux: policy capability network_peer_controls=1 Mar 4 01:04:32.503775 kernel: SELinux: policy capability open_perms=1 Mar 4 01:04:32.503788 kernel: SELinux: policy capability extended_socket_class=1 Mar 4 01:04:32.503796 kernel: SELinux: policy capability always_check_network=0 Mar 4 01:04:32.503803 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 4 01:04:32.503812 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 4 01:04:32.503820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 4 01:04:32.503829 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 4 01:04:32.503840 kernel: audit: type=1403 audit(1772586269.326:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 4 01:04:32.503851 systemd[1]: Successfully loaded SELinux policy in 210.697ms. Mar 4 01:04:32.503861 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.788ms. Mar 4 01:04:32.503871 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:04:32.503881 systemd[1]: Detected virtualization microsoft. Mar 4 01:04:32.503890 systemd[1]: Detected architecture arm64. Mar 4 01:04:32.503901 systemd[1]: Detected first boot. Mar 4 01:04:32.503910 systemd[1]: Hostname set to . 
Mar 4 01:04:32.503919 systemd[1]: Initializing machine ID from random generator. Mar 4 01:04:32.503929 zram_generator::config[1201]: No configuration found. Mar 4 01:04:32.503939 systemd[1]: Populated /etc with preset unit settings. Mar 4 01:04:32.503948 systemd[1]: Queued start job for default target multi-user.target. Mar 4 01:04:32.503959 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 4 01:04:32.503968 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 4 01:04:32.503978 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 4 01:04:32.503987 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 4 01:04:32.503996 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 4 01:04:32.504006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 4 01:04:32.504015 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 4 01:04:32.504026 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 4 01:04:32.504036 systemd[1]: Created slice user.slice - User and Session Slice. Mar 4 01:04:32.504046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:04:32.504055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:04:32.504065 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 4 01:04:32.504074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 4 01:04:32.504083 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 4 01:04:32.504093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 4 01:04:32.504102 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 4 01:04:32.504113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:04:32.504122 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 4 01:04:32.504132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:04:32.504144 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 01:04:32.504153 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:04:32.504163 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:04:32.504172 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 4 01:04:32.504183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 4 01:04:32.504193 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 01:04:32.504202 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 01:04:32.504212 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:04:32.504221 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:04:32.504231 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:04:32.504240 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 4 01:04:32.504253 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 4 01:04:32.504262 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 4 01:04:32.504272 systemd[1]: Mounting media.mount - External Media Directory... Mar 4 01:04:32.504281 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 4 01:04:32.504291 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 4 01:04:32.504301 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 4 01:04:32.504312 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 4 01:04:32.504322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 01:04:32.504332 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:04:32.504341 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 4 01:04:32.504351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 01:04:32.504370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 01:04:32.504381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 01:04:32.504391 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 4 01:04:32.504401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 01:04:32.504413 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 4 01:04:32.504423 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 4 01:04:32.504433 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 4 01:04:32.504443 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 01:04:32.504452 kernel: fuse: init (API version 7.39) Mar 4 01:04:32.504460 kernel: loop: module loaded Mar 4 01:04:32.504470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:04:32.504509 systemd-journald[1319]: Collecting audit messages is disabled. Mar 4 01:04:32.504536 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 4 01:04:32.504546 systemd-journald[1319]: Journal started Mar 4 01:04:32.504569 systemd-journald[1319]: Runtime Journal (/run/log/journal/68563977610e4e2eb45f5a3fba4ff250) is 8.0M, max 78.5M, 70.5M free. Mar 4 01:04:32.523968 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 4 01:04:32.539146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 01:04:32.550551 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:04:32.554959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 4 01:04:32.559632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 4 01:04:32.564471 systemd[1]: Mounted media.mount - External Media Directory. Mar 4 01:04:32.571294 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 4 01:04:32.571407 kernel: ACPI: bus type drm_connector registered Mar 4 01:04:32.576282 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 4 01:04:32.581163 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 4 01:04:32.585483 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 4 01:04:32.590812 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:04:32.596698 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 4 01:04:32.596861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 4 01:04:32.602244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 01:04:32.602542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 01:04:32.607747 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 01:04:32.607893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 01:04:32.612723 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 4 01:04:32.612865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:04:32.618271 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 4 01:04:32.618537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 4 01:04:32.623607 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:04:32.623788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:04:32.628903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 01:04:32.634105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 4 01:04:32.640036 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 4 01:04:32.647828 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:04:32.659419 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 4 01:04:32.668494 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 4 01:04:32.676554 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 4 01:04:32.681462 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 4 01:04:32.684913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 4 01:04:32.691871 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 4 01:04:32.696995 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 01:04:32.698122 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 4 01:04:32.703137 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 4 01:04:32.705764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:04:32.716619 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 01:04:32.723505 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 4 01:04:32.730420 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 4 01:04:32.735689 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 4 01:04:32.744681 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 4 01:04:32.754130 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 4 01:04:32.761530 udevadm[1363]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 4 01:04:32.787793 systemd-journald[1319]: Time spent on flushing to /var/log/journal/68563977610e4e2eb45f5a3fba4ff250 is 55.789ms for 886 entries. Mar 4 01:04:32.787793 systemd-journald[1319]: System Journal (/var/log/journal/68563977610e4e2eb45f5a3fba4ff250) is 11.8M, max 2.6G, 2.6G free. Mar 4 01:04:32.873088 systemd-journald[1319]: Received client request to flush runtime journal. Mar 4 01:04:32.873123 systemd-journald[1319]: /var/log/journal/68563977610e4e2eb45f5a3fba4ff250/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Mar 4 01:04:32.873148 systemd-journald[1319]: Rotating system journal. Mar 4 01:04:32.821889 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Mar 4 01:04:32.821899 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Mar 4 01:04:32.828693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:04:32.845601 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Mar 4 01:04:32.875548 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 4 01:04:32.891917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:04:32.991711 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 4 01:04:33.000473 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:04:33.018047 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Mar 4 01:04:33.019036 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Mar 4 01:04:33.023967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:04:33.393656 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 4 01:04:33.403577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:04:33.429308 systemd-udevd[1388]: Using default interface naming scheme 'v255'. Mar 4 01:04:33.555044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:04:33.576659 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 01:04:33.610694 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 4 01:04:33.622864 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Mar 4 01:04:33.680037 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 01:04:33.708449 kernel: mousedev: PS/2 mouse device common for all mice Mar 4 01:04:33.752443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 4 01:04:33.783642 systemd-networkd[1398]: lo: Link UP Mar 4 01:04:33.783653 systemd-networkd[1398]: lo: Gained carrier Mar 4 01:04:33.785556 systemd-networkd[1398]: Enumeration completed Mar 4 01:04:33.785681 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 4 01:04:33.790630 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:04:33.790640 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 01:04:33.798039 kernel: hv_vmbus: registering driver hyperv_fb Mar 4 01:04:33.802908 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 4 01:04:33.802929 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 4 01:04:33.810482 kernel: hv_vmbus: registering driver hv_balloon Mar 4 01:04:33.810546 kernel: Console: switching to colour dummy device 80x25 Mar 4 01:04:33.818402 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 4 01:04:33.818458 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 4 01:04:33.815662 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 4 01:04:33.827330 kernel: Console: switching to colour frame buffer device 128x48 Mar 4 01:04:33.854692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:04:33.867416 kernel: mlx5_core a391:00:02.0 enP41873s1: Link up Mar 4 01:04:33.897390 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1389) Mar 4 01:04:33.897464 kernel: hv_netvsc 00224878-2eeb-0022-4878-2eeb00224878 eth0: Data path switched to VF: enP41873s1 Mar 4 01:04:33.902916 systemd-networkd[1398]: enP41873s1: Link UP Mar 4 01:04:33.903805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:04:33.904520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 4 01:04:33.909875 systemd-networkd[1398]: eth0: Link UP Mar 4 01:04:33.909957 systemd-networkd[1398]: eth0: Gained carrier Mar 4 01:04:33.910018 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:04:33.913963 systemd-networkd[1398]: enP41873s1: Gained carrier Mar 4 01:04:33.917922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:04:33.919458 systemd-networkd[1398]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 4 01:04:33.974436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 4 01:04:34.018818 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 4 01:04:34.037571 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 4 01:04:34.165640 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 01:04:34.192905 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 4 01:04:34.199256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:04:34.218587 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 4 01:04:34.222928 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 01:04:34.248692 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 4 01:04:34.254392 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 4 01:04:34.259721 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 4 01:04:34.259749 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 4 01:04:34.264391 systemd[1]: Reached target machines.target - Containers. Mar 4 01:04:34.269582 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 4 01:04:34.280481 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 4 01:04:34.286628 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 4 01:04:34.291208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 01:04:34.292576 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 4 01:04:34.298584 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 4 01:04:34.305842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 4 01:04:34.316436 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 4 01:04:34.354886 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 4 01:04:34.355713 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 4 01:04:34.391712 kernel: loop0: detected capacity change from 0 to 114432 Mar 4 01:04:34.395138 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 4 01:04:34.436764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:04:34.724389 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 4 01:04:34.742385 kernel: loop1: detected capacity change from 0 to 209336 Mar 4 01:04:34.810377 kernel: loop2: detected capacity change from 0 to 114328 Mar 4 01:04:34.951539 systemd-networkd[1398]: eth0: Gained IPv6LL Mar 4 01:04:34.953292 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Mar 4 01:04:35.165476 kernel: loop3: detected capacity change from 0 to 31320
Mar 4 01:04:35.532569 kernel: loop4: detected capacity change from 0 to 114432
Mar 4 01:04:35.544381 kernel: loop5: detected capacity change from 0 to 209336
Mar 4 01:04:35.562448 kernel: loop6: detected capacity change from 0 to 114328
Mar 4 01:04:35.575383 kernel: loop7: detected capacity change from 0 to 31320
Mar 4 01:04:35.585739 (sd-merge)[1509]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 4 01:04:35.586139 (sd-merge)[1509]: Merged extensions into '/usr'.
Mar 4 01:04:35.590259 systemd[1]: Reloading requested from client PID 1490 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 01:04:35.590278 systemd[1]: Reloading...
Mar 4 01:04:35.648010 zram_generator::config[1535]: No configuration found.
Mar 4 01:04:35.777091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:04:35.848933 systemd[1]: Reloading finished in 258 ms.
Mar 4 01:04:35.863134 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 01:04:35.873475 systemd[1]: Starting ensure-sysext.service...
Mar 4 01:04:35.877640 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:04:35.884340 systemd[1]: Reloading requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)...
Mar 4 01:04:35.884452 systemd[1]: Reloading...
Mar 4 01:04:35.902607 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 01:04:35.902874 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 01:04:35.905666 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 01:04:35.905997 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Mar 4 01:04:35.906128 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Mar 4 01:04:35.908673 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:04:35.908680 systemd-tmpfiles[1598]: Skipping /boot
Mar 4 01:04:35.917026 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:04:35.919402 systemd-tmpfiles[1598]: Skipping /boot
Mar 4 01:04:35.966428 zram_generator::config[1627]: No configuration found.
Mar 4 01:04:36.085168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:04:36.159277 systemd[1]: Reloading finished in 274 ms.
Mar 4 01:04:36.177158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:04:36.192608 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:04:36.198961 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 01:04:36.206533 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 01:04:36.213525 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:04:36.220503 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 01:04:36.237727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:04:36.248422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:04:36.254848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:04:36.270844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:04:36.277723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:04:36.278689 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 01:04:36.289046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:04:36.289202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:04:36.294808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:04:36.294953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:04:36.301076 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:04:36.301252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:04:36.315046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:04:36.319558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:04:36.327822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:04:36.342408 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:04:36.350657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:04:36.356744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:04:36.357018 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 01:04:36.362773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:04:36.362931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:04:36.368463 systemd-resolved[1695]: Positive Trust Anchors:
Mar 4 01:04:36.368573 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:04:36.368717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:04:36.369965 systemd-resolved[1695]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:04:36.370306 systemd-resolved[1695]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:04:36.374832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:04:36.374983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:04:36.380661 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:04:36.380835 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:04:36.387865 systemd-resolved[1695]: Using system hostname 'ci-4081.3.6-n-8ef68d175b'.
Mar 4 01:04:36.388249 systemd[1]: Finished ensure-sysext.service.
Mar 4 01:04:36.393630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:04:36.399181 augenrules[1727]: No rules
Mar 4 01:04:36.400945 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:04:36.407539 systemd[1]: Reached target network.target - Network.
Mar 4 01:04:36.411572 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 01:04:36.416150 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:04:36.421342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:04:36.421419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:04:36.422049 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 01:04:36.865633 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 01:04:36.871435 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 01:04:41.079368 ldconfig[1486]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 01:04:41.095841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 01:04:41.106573 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 01:04:41.118028 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 01:04:41.122988 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:04:41.127265 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 01:04:41.132168 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 01:04:41.137296 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 01:04:41.141763 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 01:04:41.146800 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 01:04:41.151801 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 01:04:41.151831 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:04:41.155421 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:04:41.167407 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 01:04:41.173353 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 01:04:41.178564 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 01:04:41.183246 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 01:04:41.187557 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:04:41.191564 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:04:41.195476 systemd[1]: System is tainted: cgroupsv1
Mar 4 01:04:41.195524 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:04:41.195543 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:04:41.206457 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 4 01:04:41.212461 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 01:04:41.228416 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 4 01:04:41.237332 (chronyd)[1757]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 4 01:04:41.248561 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 01:04:41.253663 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 01:04:41.258831 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 01:04:41.264626 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 01:04:41.264673 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 4 01:04:41.268430 jq[1764]: false
Mar 4 01:04:41.266727 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 4 01:04:41.272751 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 4 01:04:41.274550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:04:41.276565 KVP[1766]: KVP starting; pid is:1766
Mar 4 01:04:41.285579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 01:04:41.292594 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 01:04:41.298494 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 01:04:41.310687 extend-filesystems[1765]: Found loop4
Mar 4 01:04:41.335161 kernel: hv_utils: KVP IC version 4.0
Mar 4 01:04:41.311100 chronyd[1776]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 4 01:04:41.315521 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found loop5
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found loop6
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found loop7
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda1
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda2
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda3
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found usr
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda4
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda6
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda7
Mar 4 01:04:41.335805 extend-filesystems[1765]: Found sda9
Mar 4 01:04:41.335805 extend-filesystems[1765]: Checking size of /dev/sda9
Mar 4 01:04:41.323811 KVP[1766]: KVP LIC Version: 3.1
Mar 4 01:04:41.329980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 01:04:41.464973 extend-filesystems[1765]: Old size kept for /dev/sda9
Mar 4 01:04:41.464973 extend-filesystems[1765]: Found sr0
Mar 4 01:04:41.360189 chronyd[1776]: Timezone right/UTC failed leap second check, ignoring
Mar 4 01:04:41.341303 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 01:04:41.368792 chronyd[1776]: Loaded seccomp filter (level 2)
Mar 4 01:04:41.349190 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 01:04:41.469877 dbus-daemon[1761]: [system] SELinux support is enabled
Mar 4 01:04:41.355493 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 01:04:41.373516 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 01:04:41.394204 systemd[1]: Started chronyd.service - NTP client/server.
Mar 4 01:04:41.514347 update_engine[1789]: I20260304 01:04:41.445084 1789 main.cc:92] Flatcar Update Engine starting
Mar 4 01:04:41.514347 update_engine[1789]: I20260304 01:04:41.472769 1789 update_check_scheduler.cc:74] Next update check in 4m45s
Mar 4 01:04:41.411947 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 01:04:41.514678 jq[1797]: true
Mar 4 01:04:41.412178 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 01:04:41.412446 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 01:04:41.412637 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 01:04:41.431780 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 01:04:41.432002 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 01:04:41.443475 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 01:04:41.462699 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 01:04:41.462919 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 01:04:41.477753 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 01:04:41.516422 jq[1821]: true
Mar 4 01:04:41.517375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1818)
Mar 4 01:04:41.528687 coreos-metadata[1759]: Mar 04 01:04:41.528 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 4 01:04:41.532184 coreos-metadata[1759]: Mar 04 01:04:41.532 INFO Fetch successful
Mar 4 01:04:41.532529 coreos-metadata[1759]: Mar 04 01:04:41.532 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 4 01:04:41.537241 coreos-metadata[1759]: Mar 04 01:04:41.537 INFO Fetch successful
Mar 4 01:04:41.537401 coreos-metadata[1759]: Mar 04 01:04:41.537 INFO Fetching http://168.63.129.16/machine/bf10725b-0d67-490a-ba40-4e7140ee895f/d5fe6abb%2D87cb%2D44d9%2D98e7%2D252495f88376.%5Fci%2D4081.3.6%2Dn%2D8ef68d175b?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 4 01:04:41.540892 coreos-metadata[1759]: Mar 04 01:04:41.540 INFO Fetch successful
Mar 4 01:04:41.540892 coreos-metadata[1759]: Mar 04 01:04:41.540 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 4 01:04:41.546150 (ntainerd)[1824]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 01:04:41.552025 coreos-metadata[1759]: Mar 04 01:04:41.551 INFO Fetch successful
Mar 4 01:04:41.579122 systemd-logind[1785]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 4 01:04:41.579536 systemd-logind[1785]: New seat seat0.
Mar 4 01:04:41.579942 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 01:04:41.592493 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 01:04:41.632690 tar[1812]: linux-arm64/LICENSE
Mar 4 01:04:41.632690 tar[1812]: linux-arm64/helm
Mar 4 01:04:41.647485 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 01:04:41.647683 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 01:04:41.655698 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 01:04:41.655906 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 01:04:41.665225 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 01:04:41.672162 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 01:04:41.685559 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 4 01:04:41.699797 bash[1889]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 01:04:41.702733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 4 01:04:41.719240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 01:04:41.721719 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 4 01:04:41.910450 locksmithd[1894]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 01:04:42.238536 sshd_keygen[1790]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 4 01:04:42.259673 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 4 01:04:42.281222 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 4 01:04:42.291912 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Mar 4 01:04:42.303269 systemd[1]: issuegen.service: Deactivated successfully.
Mar 4 01:04:42.304333 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 4 01:04:42.316169 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 4 01:04:42.348750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 4 01:04:42.362554 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Mar 4 01:04:42.376646 tar[1812]: linux-arm64/README.md
Mar 4 01:04:42.383234 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 4 01:04:42.389658 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Mar 4 01:04:42.394799 systemd[1]: Reached target getty.target - Login Prompts.
Mar 4 01:04:42.417741 containerd[1824]: time="2026-03-04T01:04:42.417663040Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 01:04:42.418351 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 01:04:42.445962 containerd[1824]: time="2026-03-04T01:04:42.445917040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447378 containerd[1824]: time="2026-03-04T01:04:42.447195160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447378 containerd[1824]: time="2026-03-04T01:04:42.447226280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 01:04:42.447378 containerd[1824]: time="2026-03-04T01:04:42.447243480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 01:04:42.447479 containerd[1824]: time="2026-03-04T01:04:42.447401360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 01:04:42.447479 containerd[1824]: time="2026-03-04T01:04:42.447419280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447517 containerd[1824]: time="2026-03-04T01:04:42.447479680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447517 containerd[1824]: time="2026-03-04T01:04:42.447492440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447707 containerd[1824]: time="2026-03-04T01:04:42.447678880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447734 containerd[1824]: time="2026-03-04T01:04:42.447706680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447734 containerd[1824]: time="2026-03-04T01:04:42.447721040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447734 containerd[1824]: time="2026-03-04T01:04:42.447730600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.447812 containerd[1824]: time="2026-03-04T01:04:42.447797240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.448001 containerd[1824]: time="2026-03-04T01:04:42.447984160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:04:42.448120 containerd[1824]: time="2026-03-04T01:04:42.448104600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:04:42.448140 containerd[1824]: time="2026-03-04T01:04:42.448120520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 01:04:42.448199 containerd[1824]: time="2026-03-04T01:04:42.448185920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 01:04:42.448249 containerd[1824]: time="2026-03-04T01:04:42.448229440Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 01:04:42.462718 containerd[1824]: time="2026-03-04T01:04:42.462682000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 01:04:42.462763 containerd[1824]: time="2026-03-04T01:04:42.462745560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 01:04:42.462793 containerd[1824]: time="2026-03-04T01:04:42.462765240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 01:04:42.462793 containerd[1824]: time="2026-03-04T01:04:42.462786800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 01:04:42.462832 containerd[1824]: time="2026-03-04T01:04:42.462805160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 01:04:42.462968 containerd[1824]: time="2026-03-04T01:04:42.462948560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 01:04:42.463271 containerd[1824]: time="2026-03-04T01:04:42.463253800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 01:04:42.463380 containerd[1824]: time="2026-03-04T01:04:42.463351440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 01:04:42.463405 containerd[1824]: time="2026-03-04T01:04:42.463379600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 01:04:42.463405 containerd[1824]: time="2026-03-04T01:04:42.463392760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 01:04:42.463449 containerd[1824]: time="2026-03-04T01:04:42.463406840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463449 containerd[1824]: time="2026-03-04T01:04:42.463419040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463449 containerd[1824]: time="2026-03-04T01:04:42.463434960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463504 containerd[1824]: time="2026-03-04T01:04:42.463448640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463504 containerd[1824]: time="2026-03-04T01:04:42.463462880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463504 containerd[1824]: time="2026-03-04T01:04:42.463481280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463504 containerd[1824]: time="2026-03-04T01:04:42.463494280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463572 containerd[1824]: time="2026-03-04T01:04:42.463506520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 01:04:42.463572 containerd[1824]: time="2026-03-04T01:04:42.463542280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463572 containerd[1824]: time="2026-03-04T01:04:42.463556880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463572 containerd[1824]: time="2026-03-04T01:04:42.463568960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463644 containerd[1824]: time="2026-03-04T01:04:42.463581320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463644 containerd[1824]: time="2026-03-04T01:04:42.463593360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463644 containerd[1824]: time="2026-03-04T01:04:42.463607320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463644 containerd[1824]: time="2026-03-04T01:04:42.463619240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463644 containerd[1824]: time="2026-03-04T01:04:42.463631600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463645120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463658560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463670920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463682720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463694560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463709480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 01:04:42.463733 containerd[1824]: time="2026-03-04T01:04:42.463729240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463742000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463752640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463799560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463817440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463828360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463839280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463848840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463862920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463872520Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 01:04:42.463884 containerd[1824]: time="2026-03-04T01:04:42.463882440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 01:04:42.465031 containerd[1824]: time="2026-03-04T01:04:42.464148080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 01:04:42.465031 containerd[1824]: time="2026-03-04T01:04:42.464212080Z" level=info msg="Connect containerd service" Mar 4 01:04:42.465031 containerd[1824]: time="2026-03-04T01:04:42.464247520Z" level=info msg="using legacy CRI server" Mar 4 01:04:42.465031 containerd[1824]: time="2026-03-04T01:04:42.464254200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 01:04:42.465031 containerd[1824]: time="2026-03-04T01:04:42.464336800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 01:04:42.465031 containerd[1824]: time="2026-03-04T01:04:42.464973520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 01:04:42.465264 containerd[1824]: time="2026-03-04T01:04:42.465224720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 01:04:42.465287 containerd[1824]: time="2026-03-04T01:04:42.465263640Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465298000Z" level=info msg="Start subscribing containerd event" Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465336280Z" level=info msg="Start recovering state" Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465407800Z" level=info msg="Start event monitor" Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465419600Z" level=info msg="Start snapshots syncer" Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465428200Z" level=info msg="Start cni network conf syncer for default" Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465436200Z" level=info msg="Start streaming server" Mar 4 01:04:42.465933 containerd[1824]: time="2026-03-04T01:04:42.465487920Z" level=info msg="containerd successfully booted in 0.049641s" Mar 4 01:04:42.465596 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 01:04:42.558535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:04:42.564116 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:04:42.565033 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 01:04:42.576415 systemd[1]: Startup finished in 14.420s (kernel) + 13.459s (userspace) = 27.880s. Mar 4 01:04:42.777877 login[1936]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 4 01:04:42.779034 login[1935]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:42.786783 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 01:04:42.793789 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 01:04:42.796787 systemd-logind[1785]: New session 1 of user core. 
Mar 4 01:04:42.820831 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 4 01:04:42.827941 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 4 01:04:42.844803 (systemd)[1965]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 4 01:04:42.968296 systemd[1965]: Queued start job for default target default.target.
Mar 4 01:04:42.969030 systemd[1965]: Created slice app.slice - User Application Slice.
Mar 4 01:04:42.969056 systemd[1965]: Reached target paths.target - Paths.
Mar 4 01:04:42.969067 systemd[1965]: Reached target timers.target - Timers.
Mar 4 01:04:42.974551 systemd[1965]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 4 01:04:42.986522 systemd[1965]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 4 01:04:42.986584 systemd[1965]: Reached target sockets.target - Sockets.
Mar 4 01:04:42.986596 systemd[1965]: Reached target basic.target - Basic System.
Mar 4 01:04:42.986637 systemd[1965]: Reached target default.target - Main User Target.
Mar 4 01:04:42.986662 systemd[1965]: Startup finished in 136ms.
Mar 4 01:04:42.986730 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 4 01:04:42.994090 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 4 01:04:43.016346 kubelet[1952]: E0304 01:04:43.016295 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:04:43.020296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:04:43.020610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:04:43.779320 login[1936]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:04:43.786538 systemd-logind[1785]: New session 2 of user core. Mar 4 01:04:43.788581 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 01:04:43.847688 waagent[1932]: 2026-03-04T01:04:43.847598Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 4 01:04:43.851860 waagent[1932]: 2026-03-04T01:04:43.851801Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 4 01:04:43.855169 waagent[1932]: 2026-03-04T01:04:43.855124Z INFO Daemon Daemon Python: 3.11.9 Mar 4 01:04:43.860375 waagent[1932]: 2026-03-04T01:04:43.859434Z INFO Daemon Daemon Run daemon Mar 4 01:04:43.862609 waagent[1932]: 2026-03-04T01:04:43.862523Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 4 01:04:43.869171 waagent[1932]: 2026-03-04T01:04:43.869117Z INFO Daemon Daemon Using waagent for provisioning Mar 4 01:04:43.873397 waagent[1932]: 2026-03-04T01:04:43.873345Z INFO Daemon Daemon Activate resource disk Mar 4 01:04:43.877051 waagent[1932]: 2026-03-04T01:04:43.877010Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 4 01:04:43.885943 waagent[1932]: 2026-03-04T01:04:43.885890Z INFO Daemon Daemon Found device: None Mar 4 01:04:43.889419 waagent[1932]: 2026-03-04T01:04:43.889379Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 4 01:04:43.895812 waagent[1932]: 2026-03-04T01:04:43.895774Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 4 01:04:43.905635 waagent[1932]: 2026-03-04T01:04:43.905591Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 4 01:04:43.909860 waagent[1932]: 2026-03-04T01:04:43.909824Z INFO Daemon Daemon Running default provisioning handler Mar 4 01:04:43.919708 
waagent[1932]: 2026-03-04T01:04:43.919644Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 4 01:04:43.929684 waagent[1932]: 2026-03-04T01:04:43.929625Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 4 01:04:43.936767 waagent[1932]: 2026-03-04T01:04:43.936721Z INFO Daemon Daemon cloud-init is enabled: False Mar 4 01:04:43.940378 waagent[1932]: 2026-03-04T01:04:43.940336Z INFO Daemon Daemon Copying ovf-env.xml Mar 4 01:04:44.082425 waagent[1932]: 2026-03-04T01:04:44.081425Z INFO Daemon Daemon Successfully mounted dvd Mar 4 01:04:44.094924 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 4 01:04:44.095246 waagent[1932]: 2026-03-04T01:04:44.094918Z INFO Daemon Daemon Detect protocol endpoint Mar 4 01:04:44.098573 waagent[1932]: 2026-03-04T01:04:44.098525Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 4 01:04:44.102682 waagent[1932]: 2026-03-04T01:04:44.102642Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 4 01:04:44.107373 waagent[1932]: 2026-03-04T01:04:44.107334Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 4 01:04:44.111464 waagent[1932]: 2026-03-04T01:04:44.111427Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 4 01:04:44.115216 waagent[1932]: 2026-03-04T01:04:44.115178Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 4 01:04:44.154776 waagent[1932]: 2026-03-04T01:04:44.154734Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 4 01:04:44.160046 waagent[1932]: 2026-03-04T01:04:44.160022Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 4 01:04:44.163943 waagent[1932]: 2026-03-04T01:04:44.163910Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 4 01:04:44.429061 waagent[1932]: 2026-03-04T01:04:44.428923Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 4 01:04:44.433821 waagent[1932]: 2026-03-04T01:04:44.433774Z INFO Daemon Daemon Forcing an update of the goal state. Mar 4 01:04:44.441273 waagent[1932]: 2026-03-04T01:04:44.441231Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 4 01:04:44.465179 waagent[1932]: 2026-03-04T01:04:44.465140Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 4 01:04:44.469579 waagent[1932]: 2026-03-04T01:04:44.469539Z INFO Daemon Mar 4 01:04:44.471742 waagent[1932]: 2026-03-04T01:04:44.471706Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6afb9d08-89f7-462d-b41b-0fc08c99bec6 eTag: 15488077014510918174 source: Fabric] Mar 4 01:04:44.480292 waagent[1932]: 2026-03-04T01:04:44.480251Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 4 01:04:44.485335 waagent[1932]: 2026-03-04T01:04:44.485296Z INFO Daemon
Mar 4 01:04:44.487696 waagent[1932]: 2026-03-04T01:04:44.487658Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Mar 4 01:04:44.496590 waagent[1932]: 2026-03-04T01:04:44.496561Z INFO Daemon Daemon Downloading artifacts profile blob
Mar 4 01:04:44.637565 waagent[1932]: 2026-03-04T01:04:44.637483Z INFO Daemon Downloaded certificate {'thumbprint': '4E28580630077151F32D461A286CD1829862D9C1', 'hasPrivateKey': True}
Mar 4 01:04:44.645327 waagent[1932]: 2026-03-04T01:04:44.645282Z INFO Daemon Fetch goal state completed
Mar 4 01:04:44.687870 waagent[1932]: 2026-03-04T01:04:44.687796Z INFO Daemon Daemon Starting provisioning
Mar 4 01:04:44.691735 waagent[1932]: 2026-03-04T01:04:44.691689Z INFO Daemon Daemon Handle ovf-env.xml.
Mar 4 01:04:44.695400 waagent[1932]: 2026-03-04T01:04:44.695363Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-8ef68d175b]
Mar 4 01:04:44.721392 waagent[1932]: 2026-03-04T01:04:44.720956Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-8ef68d175b]
Mar 4 01:04:44.725967 waagent[1932]: 2026-03-04T01:04:44.725917Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Mar 4 01:04:44.730622 waagent[1932]: 2026-03-04T01:04:44.730584Z INFO Daemon Daemon Primary interface is [eth0]
Mar 4 01:04:44.756000 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:04:44.756006 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:04:44.756048 systemd-networkd[1398]: eth0: DHCP lease lost
Mar 4 01:04:44.756914 waagent[1932]: 2026-03-04T01:04:44.756812Z INFO Daemon Daemon Create user account if not exists
Mar 4 01:04:44.761032 waagent[1932]: 2026-03-04T01:04:44.760989Z INFO Daemon Daemon User core already exists, skip useradd
Mar 4 01:04:44.768437 waagent[1932]: 2026-03-04T01:04:44.765340Z INFO Daemon Daemon Configure sudoer
Mar 4 01:04:44.765413 systemd-networkd[1398]: eth0: DHCPv6 lease lost
Mar 4 01:04:44.769006 waagent[1932]: 2026-03-04T01:04:44.768932Z INFO Daemon Daemon Configure sshd
Mar 4 01:04:44.772608 waagent[1932]: 2026-03-04T01:04:44.772543Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Mar 4 01:04:44.782455 waagent[1932]: 2026-03-04T01:04:44.782383Z INFO Daemon Daemon Deploy ssh public key.
Mar 4 01:04:44.796425 systemd-networkd[1398]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 4 01:04:45.854352 waagent[1932]: 2026-03-04T01:04:45.854289Z INFO Daemon Daemon Provisioning complete
Mar 4 01:04:45.870470 waagent[1932]: 2026-03-04T01:04:45.870430Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Mar 4 01:04:45.875831 waagent[1932]: 2026-03-04T01:04:45.875781Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Mar 4 01:04:45.883332 waagent[1932]: 2026-03-04T01:04:45.883288Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Mar 4 01:04:46.009475 waagent[2020]: 2026-03-04T01:04:46.009350Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Mar 4 01:04:46.010348 waagent[2020]: 2026-03-04T01:04:46.009865Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Mar 4 01:04:46.010348 waagent[2020]: 2026-03-04T01:04:46.009945Z INFO ExtHandler ExtHandler Python: 3.11.9
Mar 4 01:04:46.041385 waagent[2020]: 2026-03-04T01:04:46.041048Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Mar 4 01:04:46.041385 waagent[2020]: 2026-03-04T01:04:46.041273Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 4 01:04:46.041385 waagent[2020]: 2026-03-04T01:04:46.041332Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 4 01:04:46.049428 waagent[2020]: 2026-03-04T01:04:46.049320Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Mar 4 01:04:46.054825 waagent[2020]: 2026-03-04T01:04:46.054785Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Mar 4 01:04:46.055296 waagent[2020]: 2026-03-04T01:04:46.055257Z INFO ExtHandler
Mar 4 01:04:46.055361 waagent[2020]: 2026-03-04T01:04:46.055334Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6bc1b2b6-b29f-415c-84a7-3904c1e47953 eTag: 15488077014510918174 source: Fabric]
Mar 4 01:04:46.055656 waagent[2020]: 2026-03-04T01:04:46.055621Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 4 01:04:46.056212 waagent[2020]: 2026-03-04T01:04:46.056172Z INFO ExtHandler
Mar 4 01:04:46.056271 waagent[2020]: 2026-03-04T01:04:46.056246Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Mar 4 01:04:46.060364 waagent[2020]: 2026-03-04T01:04:46.060328Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 4 01:04:46.128787 waagent[2020]: 2026-03-04T01:04:46.128659Z INFO ExtHandler Downloaded certificate {'thumbprint': '4E28580630077151F32D461A286CD1829862D9C1', 'hasPrivateKey': True}
Mar 4 01:04:46.129237 waagent[2020]: 2026-03-04T01:04:46.129193Z INFO ExtHandler Fetch goal state completed
Mar 4 01:04:46.143701 waagent[2020]: 2026-03-04T01:04:46.143654Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2020
Mar 4 01:04:46.143842 waagent[2020]: 2026-03-04T01:04:46.143811Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Mar 4 01:04:46.145383 waagent[2020]: 2026-03-04T01:04:46.145329Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Mar 4 01:04:46.145738 waagent[2020]: 2026-03-04T01:04:46.145701Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Mar 4 01:04:46.199589 waagent[2020]: 2026-03-04T01:04:46.199547Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Mar 4 01:04:46.199778 waagent[2020]: 2026-03-04T01:04:46.199742Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Mar 4 01:04:46.205503 waagent[2020]: 2026-03-04T01:04:46.205448Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Mar 4 01:04:46.211586 systemd[1]: Reloading requested from client PID 2033 ('systemctl') (unit waagent.service)...
Mar 4 01:04:46.211602 systemd[1]: Reloading...
Mar 4 01:04:46.282537 zram_generator::config[2070]: No configuration found.
Mar 4 01:04:46.391478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:04:46.468556 systemd[1]: Reloading finished in 256 ms.
Mar 4 01:04:46.490981 waagent[2020]: 2026-03-04T01:04:46.489884Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Mar 4 01:04:46.496975 systemd[1]: Reloading requested from client PID 2126 ('systemctl') (unit waagent.service)...
Mar 4 01:04:46.497063 systemd[1]: Reloading...
Mar 4 01:04:46.564109 zram_generator::config[2160]: No configuration found.
Mar 4 01:04:46.674435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:04:46.747720 systemd[1]: Reloading finished in 250 ms.
Mar 4 01:04:46.769524 waagent[2020]: 2026-03-04T01:04:46.768417Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Mar 4 01:04:46.769524 waagent[2020]: 2026-03-04T01:04:46.768593Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Mar 4 01:04:47.112376 waagent[2020]: 2026-03-04T01:04:47.112244Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Mar 4 01:04:47.113257 waagent[2020]: 2026-03-04T01:04:47.113213Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Mar 4 01:04:47.114065 waagent[2020]: 2026-03-04T01:04:47.114017Z INFO ExtHandler ExtHandler Starting env monitor service.
Mar 4 01:04:47.114167 waagent[2020]: 2026-03-04T01:04:47.114128Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 4 01:04:47.114301 waagent[2020]: 2026-03-04T01:04:47.114210Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 4 01:04:47.114544 waagent[2020]: 2026-03-04T01:04:47.114495Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Mar 4 01:04:47.114882 waagent[2020]: 2026-03-04T01:04:47.114831Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Mar 4 01:04:47.115264 waagent[2020]: 2026-03-04T01:04:47.115208Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Mar 4 01:04:47.115504 waagent[2020]: 2026-03-04T01:04:47.115463Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Mar 4 01:04:47.115504 waagent[2020]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Mar 4 01:04:47.115504 waagent[2020]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Mar 4 01:04:47.115504 waagent[2020]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Mar 4 01:04:47.115504 waagent[2020]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Mar 4 01:04:47.115504 waagent[2020]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 4 01:04:47.115504 waagent[2020]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Mar 4 01:04:47.116034 waagent[2020]: 2026-03-04T01:04:47.115956Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Mar 4 01:04:47.116085 waagent[2020]: 2026-03-04T01:04:47.116053Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Mar 4 01:04:47.116170 waagent[2020]: 2026-03-04T01:04:47.116140Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Mar 4 01:04:47.116355 waagent[2020]: 2026-03-04T01:04:47.116282Z INFO EnvHandler ExtHandler Configure routes
Mar 4 01:04:47.116740 waagent[2020]: 2026-03-04T01:04:47.116673Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Mar 4 01:04:47.116859 waagent[2020]: 2026-03-04T01:04:47.116810Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Mar 4 01:04:47.116905 waagent[2020]: 2026-03-04T01:04:47.116874Z INFO EnvHandler ExtHandler Gateway:None
Mar 4 01:04:47.117107 waagent[2020]: 2026-03-04T01:04:47.117066Z INFO EnvHandler ExtHandler Routes:None
Mar 4 01:04:47.117282 waagent[2020]: 2026-03-04T01:04:47.117234Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Mar 4 01:04:47.123153 waagent[2020]: 2026-03-04T01:04:47.123109Z INFO ExtHandler ExtHandler
Mar 4 01:04:47.123634 waagent[2020]: 2026-03-04T01:04:47.123586Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1199eb6c-f1cd-4633-b203-4a1d13323d1c correlation e360e9ae-345f-420c-90c7-17f817cade6f created: 2026-03-04T01:03:48.225395Z]
Mar 4 01:04:47.124353 waagent[2020]: 2026-03-04T01:04:47.124313Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 4 01:04:47.124984 waagent[2020]: 2026-03-04T01:04:47.124948Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Mar 4 01:04:47.159395 waagent[2020]: 2026-03-04T01:04:47.159326Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6C90C052-5762-4ABA-9642-CB23895FE51A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Mar 4 01:04:47.172149 waagent[2020]: 2026-03-04T01:04:47.172075Z INFO MonitorHandler ExtHandler Network interfaces:
Mar 4 01:04:47.172149 waagent[2020]: Executing ['ip', '-a', '-o', 'link']:
Mar 4 01:04:47.172149 waagent[2020]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Mar 4 01:04:47.172149 waagent[2020]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:2e:eb brd ff:ff:ff:ff:ff:ff
Mar 4 01:04:47.172149 waagent[2020]: 3: enP41873s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:2e:eb brd ff:ff:ff:ff:ff:ff\ altname enP41873p0s2
Mar 4 01:04:47.172149 waagent[2020]: Executing ['ip', '-4', '-a', '-o', 'address']:
Mar 4 01:04:47.172149 waagent[2020]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Mar 4 01:04:47.172149 waagent[2020]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Mar 4 01:04:47.172149 waagent[2020]: Executing ['ip', '-6', '-a', '-o', 'address']:
Mar 4 01:04:47.172149 waagent[2020]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Mar 4 01:04:47.172149 waagent[2020]: 2: eth0 inet6 fe80::222:48ff:fe78:2eeb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Mar 4 01:04:47.219428 waagent[2020]: 2026-03-04T01:04:47.219332Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Mar 4 01:04:47.219428 waagent[2020]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 01:04:47.219428 waagent[2020]: pkts bytes target prot opt in out source destination
Mar 4 01:04:47.219428 waagent[2020]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 4 01:04:47.219428 waagent[2020]: pkts bytes target prot opt in out source destination
Mar 4 01:04:47.219428 waagent[2020]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 01:04:47.219428 waagent[2020]: pkts bytes target prot opt in out source destination
Mar 4 01:04:47.219428 waagent[2020]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 4 01:04:47.219428 waagent[2020]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 4 01:04:47.219428 waagent[2020]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 4 01:04:47.222259 waagent[2020]: 2026-03-04T01:04:47.222197Z INFO EnvHandler ExtHandler Current Firewall rules:
Mar 4 01:04:47.222259 waagent[2020]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 01:04:47.222259 waagent[2020]: pkts bytes target prot opt in out source destination
Mar 4 01:04:47.222259 waagent[2020]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Mar 4 01:04:47.222259 waagent[2020]: pkts bytes target prot opt in out source destination
Mar 4 01:04:47.222259 waagent[2020]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Mar 4 01:04:47.222259 waagent[2020]: pkts bytes target prot opt in out source destination
Mar 4 01:04:47.222259 waagent[2020]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Mar 4 01:04:47.222259 waagent[2020]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Mar 4 01:04:47.222259 waagent[2020]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Mar 4 01:04:47.222505 waagent[2020]: 2026-03-04T01:04:47.222471Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Mar 4 01:04:53.271301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 4 01:04:53.278574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:04:53.380523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:04:53.384274 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:04:53.483783 kubelet[2262]: E0304 01:04:53.483729 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:04:53.489570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:04:53.489734 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:05:03.572082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 4 01:05:03.578718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:05:03.682267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:03.685255 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:05:03.787261 kubelet[2282]: E0304 01:05:03.787211 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:05:03.791533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:05:03.791702 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:05:04.335277 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 4 01:05:04.340566 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:36294.service - OpenSSH per-connection server daemon (10.200.16.10:36294).
Mar 4 01:05:04.869506 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 36294 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:04.870264 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:04.874190 systemd-logind[1785]: New session 3 of user core.
Mar 4 01:05:04.880773 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 4 01:05:05.160453 chronyd[1776]: Selected source PHC0
Mar 4 01:05:05.302585 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:36298.service - OpenSSH per-connection server daemon (10.200.16.10:36298).
Mar 4 01:05:05.785458 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 36298 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:05.786719 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:05.791070 systemd-logind[1785]: New session 4 of user core.
Mar 4 01:05:05.796583 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 4 01:05:06.138566 sshd[2295]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:06.142204 systemd-logind[1785]: Session 4 logged out. Waiting for processes to exit.
Mar 4 01:05:06.142813 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:36298.service: Deactivated successfully.
Mar 4 01:05:06.145450 systemd[1]: session-4.scope: Deactivated successfully.
Mar 4 01:05:06.146280 systemd-logind[1785]: Removed session 4.
Mar 4 01:05:06.230797 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:36312.service - OpenSSH per-connection server daemon (10.200.16.10:36312).
Mar 4 01:05:06.717760 sshd[2303]: Accepted publickey for core from 10.200.16.10 port 36312 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:06.718596 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:06.722873 systemd-logind[1785]: New session 5 of user core.
Mar 4 01:05:06.729594 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 4 01:05:07.069561 sshd[2303]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:07.072951 systemd-logind[1785]: Session 5 logged out. Waiting for processes to exit.
Mar 4 01:05:07.074786 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:36312.service: Deactivated successfully.
Mar 4 01:05:07.077011 systemd[1]: session-5.scope: Deactivated successfully.
Mar 4 01:05:07.077919 systemd-logind[1785]: Removed session 5.
Mar 4 01:05:07.155575 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:36314.service - OpenSSH per-connection server daemon (10.200.16.10:36314).
Mar 4 01:05:07.638386 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 36314 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:07.638992 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:07.642848 systemd-logind[1785]: New session 6 of user core.
Mar 4 01:05:07.651677 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 4 01:05:07.990547 sshd[2311]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:07.993644 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:36314.service: Deactivated successfully.
Mar 4 01:05:07.996104 systemd-logind[1785]: Session 6 logged out. Waiting for processes to exit.
Mar 4 01:05:07.996767 systemd[1]: session-6.scope: Deactivated successfully.
Mar 4 01:05:07.997590 systemd-logind[1785]: Removed session 6.
Mar 4 01:05:08.078573 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:36326.service - OpenSSH per-connection server daemon (10.200.16.10:36326).
Mar 4 01:05:08.563916 sshd[2319]: Accepted publickey for core from 10.200.16.10 port 36326 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:08.564731 sshd[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:08.568378 systemd-logind[1785]: New session 7 of user core.
Mar 4 01:05:08.573631 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 01:05:08.986634 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 01:05:08.986901 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:05:08.998391 sudo[2323]: pam_unix(sudo:session): session closed for user root
Mar 4 01:05:09.074460 sshd[2319]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:09.078682 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:36326.service: Deactivated successfully.
Mar 4 01:05:09.081302 systemd-logind[1785]: Session 7 logged out. Waiting for processes to exit.
Mar 4 01:05:09.081981 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 01:05:09.082790 systemd-logind[1785]: Removed session 7.
Mar 4 01:05:09.163569 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:36342.service - OpenSSH per-connection server daemon (10.200.16.10:36342).
Mar 4 01:05:09.648338 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 36342 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:09.649172 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:09.653294 systemd-logind[1785]: New session 8 of user core.
Mar 4 01:05:09.658659 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 01:05:09.922276 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 01:05:09.922615 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:05:09.926217 sudo[2333]: pam_unix(sudo:session): session closed for user root
Mar 4 01:05:09.930655 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 01:05:09.930895 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:05:09.941562 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 01:05:09.943702 auditctl[2336]: No rules
Mar 4 01:05:09.944144 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 01:05:09.944355 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 01:05:09.947655 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:05:09.968654 augenrules[2355]: No rules
Mar 4 01:05:09.969996 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:05:09.972946 sudo[2332]: pam_unix(sudo:session): session closed for user root
Mar 4 01:05:10.051524 sshd[2328]: pam_unix(sshd:session): session closed for user core
Mar 4 01:05:10.055798 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:36342.service: Deactivated successfully.
Mar 4 01:05:10.056119 systemd-logind[1785]: Session 8 logged out. Waiting for processes to exit.
Mar 4 01:05:10.058047 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 01:05:10.059684 systemd-logind[1785]: Removed session 8.
Mar 4 01:05:10.137663 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:57264.service - OpenSSH per-connection server daemon (10.200.16.10:57264).
Mar 4 01:05:10.619812 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 57264 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4
Mar 4 01:05:10.620625 sshd[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:05:10.624016 systemd-logind[1785]: New session 9 of user core.
Mar 4 01:05:10.630813 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 01:05:10.893454 sudo[2368]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 01:05:10.893723 sudo[2368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:05:12.070554 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 01:05:12.070925 (dockerd)[2383]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 01:05:12.873376 dockerd[2383]: time="2026-03-04T01:05:12.871930431Z" level=info msg="Starting up"
Mar 4 01:05:13.542277 dockerd[2383]: time="2026-03-04T01:05:13.542074705Z" level=info msg="Loading containers: start."
Mar 4 01:05:13.745439 kernel: Initializing XFRM netlink socket
Mar 4 01:05:13.821960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 01:05:13.827514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:05:13.942539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:13.945661 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:05:14.029760 kubelet[2462]: E0304 01:05:14.029702 2462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:05:14.033528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:05:14.033683 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:05:14.351547 systemd-networkd[1398]: docker0: Link UP
Mar 4 01:05:14.396375 dockerd[2383]: time="2026-03-04T01:05:14.396314877Z" level=info msg="Loading containers: done."
Mar 4 01:05:14.476128 dockerd[2383]: time="2026-03-04T01:05:14.476079346Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 01:05:14.476273 dockerd[2383]: time="2026-03-04T01:05:14.476188266Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 01:05:14.476333 dockerd[2383]: time="2026-03-04T01:05:14.476313866Z" level=info msg="Daemon has completed initialization"
Mar 4 01:05:14.535402 dockerd[2383]: time="2026-03-04T01:05:14.535106059Z" level=info msg="API listen on /run/docker.sock"
Mar 4 01:05:14.535838 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 01:05:14.911586 containerd[1824]: time="2026-03-04T01:05:14.911275851Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 4 01:05:15.805071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633468334.mount: Deactivated successfully.
Mar 4 01:05:17.641489 containerd[1824]: time="2026-03-04T01:05:17.640453265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:17.644743 containerd[1824]: time="2026-03-04T01:05:17.644712825Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 4 01:05:17.647504 containerd[1824]: time="2026-03-04T01:05:17.647471824Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:17.652945 containerd[1824]: time="2026-03-04T01:05:17.652909104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:17.654064 containerd[1824]: time="2026-03-04T01:05:17.654029063Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.742708572s"
Mar 4 01:05:17.654126 containerd[1824]: time="2026-03-04T01:05:17.654068023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 4 01:05:17.654631 containerd[1824]: time="2026-03-04T01:05:17.654605103Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 4 01:05:19.827598 containerd[1824]: time="2026-03-04T01:05:19.827535668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:19.831647 containerd[1824]: time="2026-03-04T01:05:19.831616787Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 4 01:05:19.834866 containerd[1824]: time="2026-03-04T01:05:19.834823307Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:19.840163 containerd[1824]: time="2026-03-04T01:05:19.839719066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:19.840891 containerd[1824]: time="2026-03-04T01:05:19.840825226Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 2.186189163s"
Mar 4 01:05:19.840945 containerd[1824]: time="2026-03-04T01:05:19.840890306Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 4 01:05:19.841320 containerd[1824]: time="2026-03-04T01:05:19.841298986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 4 01:05:21.908902 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 4 01:05:24.071947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 4 01:05:24.079540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:05:24.207519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:24.210472 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:05:24.241830 kubelet[2612]: E0304 01:05:24.241773 2612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:05:24.246535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:05:24.246694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:05:26.699035 update_engine[1789]: I20260304 01:05:26.698400 1789 update_attempter.cc:509] Updating boot flags...
Mar 4 01:05:26.741419 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2632)
Mar 4 01:05:26.828119 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2625)
Mar 4 01:05:28.824065 containerd[1824]: time="2026-03-04T01:05:28.824014685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:28.826740 containerd[1824]: time="2026-03-04T01:05:28.826705045Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 4 01:05:28.829662 containerd[1824]: time="2026-03-04T01:05:28.829635645Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:28.834656 containerd[1824]: time="2026-03-04T01:05:28.834417724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:28.835814 containerd[1824]: time="2026-03-04T01:05:28.835449284Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 8.994121018s"
Mar 4 01:05:28.835814 containerd[1824]: time="2026-03-04T01:05:28.835480364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 4 01:05:28.836050 containerd[1824]: time="2026-03-04T01:05:28.836027484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 4 01:05:30.188278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370648969.mount: Deactivated successfully.
Mar 4 01:05:30.642699 containerd[1824]: time="2026-03-04T01:05:30.642647439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:30.645416 containerd[1824]: time="2026-03-04T01:05:30.645201719Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 4 01:05:30.648386 containerd[1824]: time="2026-03-04T01:05:30.648165079Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:30.652102 containerd[1824]: time="2026-03-04T01:05:30.652056159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:30.652904 containerd[1824]: time="2026-03-04T01:05:30.652623519Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.816465755s"
Mar 4 01:05:30.652904 containerd[1824]: time="2026-03-04T01:05:30.652657479Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 4 01:05:30.653071 containerd[1824]: time="2026-03-04T01:05:30.653047159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 4 01:05:31.317552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount691653884.mount: Deactivated successfully.
Mar 4 01:05:32.728383 containerd[1824]: time="2026-03-04T01:05:32.728325583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:32.731148 containerd[1824]: time="2026-03-04T01:05:32.731118903Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 4 01:05:32.734338 containerd[1824]: time="2026-03-04T01:05:32.734311583Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:32.739295 containerd[1824]: time="2026-03-04T01:05:32.739267863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:32.740322 containerd[1824]: time="2026-03-04T01:05:32.739989223Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.086911384s"
Mar 4 01:05:32.740322 containerd[1824]: time="2026-03-04T01:05:32.740019623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 4 01:05:32.740767 containerd[1824]: time="2026-03-04T01:05:32.740748023Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 4 01:05:33.324170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025727372.mount: Deactivated successfully.
Mar 4 01:05:33.343062 containerd[1824]: time="2026-03-04T01:05:33.343022406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:33.345714 containerd[1824]: time="2026-03-04T01:05:33.345690166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 4 01:05:33.348685 containerd[1824]: time="2026-03-04T01:05:33.348647766Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:33.352784 containerd[1824]: time="2026-03-04T01:05:33.352744326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:33.353768 containerd[1824]: time="2026-03-04T01:05:33.353414766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 612.569183ms"
Mar 4 01:05:33.353768 containerd[1824]: time="2026-03-04T01:05:33.353446166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 4 01:05:33.354584 containerd[1824]: time="2026-03-04T01:05:33.354395486Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 4 01:05:33.999734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257860488.mount: Deactivated successfully.
Mar 4 01:05:34.322618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 4 01:05:34.328737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:05:34.455536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:34.458191 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:05:34.493653 kubelet[2778]: E0304 01:05:34.492355 2778 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:05:34.497475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:05:34.497630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:05:37.186149 containerd[1824]: time="2026-03-04T01:05:37.186095903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:37.189234 containerd[1824]: time="2026-03-04T01:05:37.189009023Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 4 01:05:37.193375 containerd[1824]: time="2026-03-04T01:05:37.192132303Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:37.197272 containerd[1824]: time="2026-03-04T01:05:37.197227302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:05:37.198517 containerd[1824]: time="2026-03-04T01:05:37.198489902Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 3.844063936s"
Mar 4 01:05:37.198617 containerd[1824]: time="2026-03-04T01:05:37.198600382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 4 01:05:42.720849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:42.728641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:05:42.762482 systemd[1]: Reloading requested from client PID 2865 ('systemctl') (unit session-9.scope)...
Mar 4 01:05:42.762621 systemd[1]: Reloading...
Mar 4 01:05:42.867389 zram_generator::config[2906]: No configuration found.
Mar 4 01:05:42.959377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:05:43.035642 systemd[1]: Reloading finished in 272 ms.
Mar 4 01:05:43.081313 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 4 01:05:43.081449 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 4 01:05:43.081743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:43.084936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:05:43.283542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:05:43.301803 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:05:43.330728 kubelet[2985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:05:43.330728 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 01:05:43.330728 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:05:43.368472 kubelet[2985]: I0304 01:05:43.368036 2985 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 01:05:44.385639 kubelet[2985]: I0304 01:05:44.385607 2985 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 01:05:44.386008 kubelet[2985]: I0304 01:05:44.385995 2985 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:05:44.386642 kubelet[2985]: I0304 01:05:44.386617 2985 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 01:05:44.406304 kubelet[2985]: E0304 01:05:44.406271 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 01:05:44.406820 kubelet[2985]: I0304 01:05:44.406804 2985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:05:44.414079 kubelet[2985]: E0304 01:05:44.414043 2985 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:05:44.414188 kubelet[2985]: I0304 01:05:44.414177 2985 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:05:44.417335 kubelet[2985]: I0304 01:05:44.417317 2985 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 01:05:44.417783 kubelet[2985]: I0304 01:05:44.417758 2985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:05:44.417977 kubelet[2985]: I0304 01:05:44.417843 2985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8ef68d175b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 4 01:05:44.418090 kubelet[2985]: I0304 01:05:44.418080 2985 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 01:05:44.418148 kubelet[2985]: I0304 01:05:44.418139 2985 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 01:05:44.418312 kubelet[2985]: I0304 01:05:44.418301 2985 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:05:44.420991 kubelet[2985]: I0304 01:05:44.420977 2985 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 01:05:44.421075 kubelet[2985]: I0304 01:05:44.421065 2985 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:05:44.421143 kubelet[2985]: I0304 01:05:44.421135 2985 kubelet.go:386] "Adding apiserver pod source"
Mar 4 01:05:44.422334 kubelet[2985]: I0304 01:05:44.422315 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:05:44.426065 kubelet[2985]: E0304 01:05:44.426039 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8ef68d175b&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:05:44.426657 kubelet[2985]: E0304 01:05:44.426633 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:05:44.427017 kubelet[2985]: I0304 01:05:44.426736 2985 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:05:44.427299 kubelet[2985]: I0304 01:05:44.427283 2985 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:05:44.427352 kubelet[2985]: W0304 01:05:44.427341 2985 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 01:05:44.430258 kubelet[2985]: I0304 01:05:44.430209 2985 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 01:05:44.430258 kubelet[2985]: I0304 01:05:44.430246 2985 server.go:1289] "Started kubelet"
Mar 4 01:05:44.431789 kubelet[2985]: I0304 01:05:44.431742 2985 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:05:44.433475 kubelet[2985]: I0304 01:05:44.433454 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 01:05:44.434524 kubelet[2985]: I0304 01:05:44.434038 2985 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 01:05:44.439548 kubelet[2985]: I0304 01:05:44.439496 2985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:05:44.439751 kubelet[2985]: I0304 01:05:44.439726 2985 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:05:44.440080 kubelet[2985]: I0304 01:05:44.440061 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:05:44.441404 kubelet[2985]: I0304 01:05:44.441386 2985 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 01:05:44.442788 kubelet[2985]: E0304 01:05:44.442443 2985 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8ef68d175b\" not found"
Mar 4 01:05:44.442979 kubelet[2985]: E0304 01:05:44.441996 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8ef68d175b.18997de38178c46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8ef68d175b,UID:ci-4081.3.6-n-8ef68d175b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8ef68d175b,},FirstTimestamp:2026-03-04 01:05:44.430224495 +0000 UTC m=+1.125236200,LastTimestamp:2026-03-04 01:05:44.430224495 +0000 UTC m=+1.125236200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8ef68d175b,}"
Mar 4 01:05:44.443063 kubelet[2985]: E0304 01:05:44.443042 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8ef68d175b?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms"
Mar 4 01:05:44.443219 kubelet[2985]: I0304 01:05:44.443200 2985 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 01:05:44.444372 kubelet[2985]: E0304 01:05:44.443924 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:05:44.444372 kubelet[2985]: I0304 01:05:44.444145 2985 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 01:05:44.444372 kubelet[2985]: I0304 01:05:44.444263 2985 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:05:44.444372 kubelet[2985]: I0304 01:05:44.444328 2985 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:05:44.445745 kubelet[2985]: I0304 01:05:44.445722 2985 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:05:44.464842 kubelet[2985]: E0304 01:05:44.464761 2985 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:05:44.485658 kubelet[2985]: I0304 01:05:44.485610 2985 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:05:44.486944 kubelet[2985]: I0304 01:05:44.486922 2985 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:05:44.486944 kubelet[2985]: I0304 01:05:44.486942 2985 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 01:05:44.487029 kubelet[2985]: I0304 01:05:44.486962 2985 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:05:44.487029 kubelet[2985]: I0304 01:05:44.486968 2985 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 4 01:05:44.487029 kubelet[2985]: E0304 01:05:44.487004 2985 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:05:44.488453 kubelet[2985]: E0304 01:05:44.488215 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:05:44.543554 kubelet[2985]: E0304 01:05:44.543523 2985 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8ef68d175b\" not found"
Mar 4 01:05:44.568081 kubelet[2985]: I0304 01:05:44.568056 2985 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 01:05:44.568081 kubelet[2985]:
I0304 01:05:44.568072 2985 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:05:44.568202 kubelet[2985]: I0304 01:05:44.568095 2985 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:05:44.574835 kubelet[2985]: I0304 01:05:44.574814 2985 policy_none.go:49] "None policy: Start" Mar 4 01:05:44.574884 kubelet[2985]: I0304 01:05:44.574839 2985 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 01:05:44.574884 kubelet[2985]: I0304 01:05:44.574851 2985 state_mem.go:35] "Initializing new in-memory state store" Mar 4 01:05:44.581826 kubelet[2985]: E0304 01:05:44.581803 2985 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:05:44.581990 kubelet[2985]: I0304 01:05:44.581974 2985 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:05:44.582019 kubelet[2985]: I0304 01:05:44.581989 2985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:05:44.583232 kubelet[2985]: I0304 01:05:44.583215 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:05:44.585679 kubelet[2985]: E0304 01:05:44.585659 2985 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:05:44.585754 kubelet[2985]: E0304 01:05:44.585694 2985 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-8ef68d175b\" not found" Mar 4 01:05:44.597047 kubelet[2985]: E0304 01:05:44.596940 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.602385 kubelet[2985]: E0304 01:05:44.602247 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.611214 kubelet[2985]: E0304 01:05:44.610269 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.642532 kubelet[2985]: E0304 01:05:44.641266 2985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8ef68d175b.18997de38178c46f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8ef68d175b,UID:ci-4081.3.6-n-8ef68d175b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8ef68d175b,},FirstTimestamp:2026-03-04 01:05:44.430224495 +0000 UTC m=+1.125236200,LastTimestamp:2026-03-04 01:05:44.430224495 +0000 UTC m=+1.125236200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8ef68d175b,}" Mar 4 01:05:44.643820 kubelet[2985]: E0304 
01:05:44.643784 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8ef68d175b?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Mar 4 01:05:44.644959 kubelet[2985]: I0304 01:05:44.644940 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbec19bb87e5d6d622ce572431c20c0b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" (UID: \"cbec19bb87e5d6d622ce572431c20c0b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645000 kubelet[2985]: I0304 01:05:44.644967 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645000 kubelet[2985]: I0304 01:05:44.644989 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645051 kubelet[2985]: I0304 01:05:44.645003 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 
01:05:44.645051 kubelet[2985]: I0304 01:05:44.645019 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645051 kubelet[2985]: I0304 01:05:44.645035 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d0759ab0d573dfe64153113064148f9-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8ef68d175b\" (UID: \"5d0759ab0d573dfe64153113064148f9\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645051 kubelet[2985]: I0304 01:05:44.645048 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbec19bb87e5d6d622ce572431c20c0b-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" (UID: \"cbec19bb87e5d6d622ce572431c20c0b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645137 kubelet[2985]: I0304 01:05:44.645062 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbec19bb87e5d6d622ce572431c20c0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" (UID: \"cbec19bb87e5d6d622ce572431c20c0b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.645137 kubelet[2985]: I0304 01:05:44.645077 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.684457 kubelet[2985]: I0304 01:05:44.684431 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.684764 kubelet[2985]: E0304 01:05:44.684739 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.886386 kubelet[2985]: I0304 01:05:44.886335 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.886666 kubelet[2985]: E0304 01:05:44.886639 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:44.899559 containerd[1824]: time="2026-03-04T01:05:44.899468854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8ef68d175b,Uid:cbec19bb87e5d6d622ce572431c20c0b,Namespace:kube-system,Attempt:0,}" Mar 4 01:05:44.903990 containerd[1824]: time="2026-03-04T01:05:44.903757134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8ef68d175b,Uid:3ef7bdc996681c20bdf65f27b6b2fdf5,Namespace:kube-system,Attempt:0,}" Mar 4 01:05:44.911913 containerd[1824]: time="2026-03-04T01:05:44.911884254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8ef68d175b,Uid:5d0759ab0d573dfe64153113064148f9,Namespace:kube-system,Attempt:0,}" Mar 4 01:05:45.045118 kubelet[2985]: E0304 01:05:45.045078 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8ef68d175b?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Mar 4 01:05:45.275004 kubelet[2985]: E0304 01:05:45.274899 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8ef68d175b&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:05:45.288707 kubelet[2985]: I0304 01:05:45.288485 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:45.288912 kubelet[2985]: E0304 01:05:45.288892 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:45.431114 kubelet[2985]: E0304 01:05:45.431082 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:05:45.601792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317256217.mount: Deactivated successfully. 
Mar 4 01:05:45.636135 containerd[1824]: time="2026-03-04T01:05:45.635327396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:05:45.641161 containerd[1824]: time="2026-03-04T01:05:45.640430917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:05:45.643251 containerd[1824]: time="2026-03-04T01:05:45.643222637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 4 01:05:45.646014 containerd[1824]: time="2026-03-04T01:05:45.645982877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:05:45.650046 containerd[1824]: time="2026-03-04T01:05:45.650013677Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:05:45.654385 containerd[1824]: time="2026-03-04T01:05:45.653174517Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:05:45.655371 containerd[1824]: time="2026-03-04T01:05:45.655336957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:05:45.659180 containerd[1824]: time="2026-03-04T01:05:45.659145078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:05:45.660013 
containerd[1824]: time="2026-03-04T01:05:45.659987318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 756.163504ms" Mar 4 01:05:45.661721 containerd[1824]: time="2026-03-04T01:05:45.661687598Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 762.144184ms" Mar 4 01:05:45.664173 containerd[1824]: time="2026-03-04T01:05:45.664131678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 752.073424ms" Mar 4 01:05:45.845607 kubelet[2985]: E0304 01:05:45.845569 2985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8ef68d175b?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Mar 4 01:05:45.985599 kubelet[2985]: E0304 01:05:45.985494 2985 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:05:46.045645 kubelet[2985]: E0304 01:05:46.045601 
2985 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:05:46.091881 kubelet[2985]: I0304 01:05:46.091856 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:46.092292 kubelet[2985]: E0304 01:05:46.092266 2985 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:46.276310 containerd[1824]: time="2026-03-04T01:05:46.276181310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:05:46.277463 containerd[1824]: time="2026-03-04T01:05:46.277351630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:05:46.277589 containerd[1824]: time="2026-03-04T01:05:46.277567310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:46.279003 containerd[1824]: time="2026-03-04T01:05:46.278936030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:46.281736 containerd[1824]: time="2026-03-04T01:05:46.281672870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:05:46.281736 containerd[1824]: time="2026-03-04T01:05:46.281716110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:05:46.281905 containerd[1824]: time="2026-03-04T01:05:46.281726830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:46.281905 containerd[1824]: time="2026-03-04T01:05:46.281797270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:46.283748 containerd[1824]: time="2026-03-04T01:05:46.282284071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:05:46.283748 containerd[1824]: time="2026-03-04T01:05:46.283386191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:05:46.283748 containerd[1824]: time="2026-03-04T01:05:46.283397071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:46.283748 containerd[1824]: time="2026-03-04T01:05:46.283465591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:46.341902 containerd[1824]: time="2026-03-04T01:05:46.341794034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8ef68d175b,Uid:3ef7bdc996681c20bdf65f27b6b2fdf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"400a0a42dfb19c42ac94c331dd202896e204ed24b67f42d9a4bdcb681dbbf70f\"" Mar 4 01:05:46.346147 containerd[1824]: time="2026-03-04T01:05:46.345939794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8ef68d175b,Uid:cbec19bb87e5d6d622ce572431c20c0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"65cc88147aa86fe41cbd9e630caa156d4e0eeb839c76abc6aed9ba5fb5b64531\"" Mar 4 01:05:46.352636 containerd[1824]: time="2026-03-04T01:05:46.352601954Z" level=info msg="CreateContainer within sandbox \"400a0a42dfb19c42ac94c331dd202896e204ed24b67f42d9a4bdcb681dbbf70f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:05:46.354581 containerd[1824]: time="2026-03-04T01:05:46.354556474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8ef68d175b,Uid:5d0759ab0d573dfe64153113064148f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"20f32e0c64d403621f91a8468442277bca6080139521036f463a4c5f4e904cab\"" Mar 4 01:05:46.357780 containerd[1824]: time="2026-03-04T01:05:46.357710714Z" level=info msg="CreateContainer within sandbox \"65cc88147aa86fe41cbd9e630caa156d4e0eeb839c76abc6aed9ba5fb5b64531\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:05:46.363025 containerd[1824]: time="2026-03-04T01:05:46.362991275Z" level=info msg="CreateContainer within sandbox \"20f32e0c64d403621f91a8468442277bca6080139521036f463a4c5f4e904cab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:05:46.409431 containerd[1824]: time="2026-03-04T01:05:46.409285557Z" level=info msg="CreateContainer within sandbox 
\"400a0a42dfb19c42ac94c331dd202896e204ed24b67f42d9a4bdcb681dbbf70f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9257d67fbdede18b1983c02b17d72c5a5eaf96e12938d710c8d64e8ec9fab090\"" Mar 4 01:05:46.410388 containerd[1824]: time="2026-03-04T01:05:46.410007797Z" level=info msg="StartContainer for \"9257d67fbdede18b1983c02b17d72c5a5eaf96e12938d710c8d64e8ec9fab090\"" Mar 4 01:05:46.421696 containerd[1824]: time="2026-03-04T01:05:46.421653158Z" level=info msg="CreateContainer within sandbox \"65cc88147aa86fe41cbd9e630caa156d4e0eeb839c76abc6aed9ba5fb5b64531\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fd0c9a9aec8a388a871eabf6b35560d053e8f994c38e46393d25eb2442701e29\"" Mar 4 01:05:46.422507 containerd[1824]: time="2026-03-04T01:05:46.422482998Z" level=info msg="StartContainer for \"fd0c9a9aec8a388a871eabf6b35560d053e8f994c38e46393d25eb2442701e29\"" Mar 4 01:05:46.431408 containerd[1824]: time="2026-03-04T01:05:46.431162198Z" level=info msg="CreateContainer within sandbox \"20f32e0c64d403621f91a8468442277bca6080139521036f463a4c5f4e904cab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"080dc411c2bc7976cefd9e94e898b62d34d5c8623beafb1e5001255b376be416\"" Mar 4 01:05:46.431835 containerd[1824]: time="2026-03-04T01:05:46.431715078Z" level=info msg="StartContainer for \"080dc411c2bc7976cefd9e94e898b62d34d5c8623beafb1e5001255b376be416\"" Mar 4 01:05:46.477389 containerd[1824]: time="2026-03-04T01:05:46.476926521Z" level=info msg="StartContainer for \"9257d67fbdede18b1983c02b17d72c5a5eaf96e12938d710c8d64e8ec9fab090\" returns successfully" Mar 4 01:05:46.511866 kubelet[2985]: E0304 01:05:46.511697 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:46.536781 containerd[1824]: time="2026-03-04T01:05:46.536664044Z" level=info 
msg="StartContainer for \"fd0c9a9aec8a388a871eabf6b35560d053e8f994c38e46393d25eb2442701e29\" returns successfully" Mar 4 01:05:46.539267 kubelet[2985]: E0304 01:05:46.538826 2985 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:05:46.554380 containerd[1824]: time="2026-03-04T01:05:46.553664125Z" level=info msg="StartContainer for \"080dc411c2bc7976cefd9e94e898b62d34d5c8623beafb1e5001255b376be416\" returns successfully" Mar 4 01:05:47.515371 kubelet[2985]: E0304 01:05:47.513878 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:47.518366 kubelet[2985]: E0304 01:05:47.517626 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:47.694881 kubelet[2985]: I0304 01:05:47.694847 2985 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:48.521999 kubelet[2985]: E0304 01:05:48.521969 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:48.523559 kubelet[2985]: E0304 01:05:48.523402 2985 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8ef68d175b\" not found" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.342506 kubelet[2985]: I0304 01:05:49.342278 2985 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.344384 kubelet[2985]: I0304 01:05:49.343439 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.397418 kubelet[2985]: E0304 01:05:49.397378 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.397418 kubelet[2985]: I0304 01:05:49.397415 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.400388 kubelet[2985]: E0304 01:05:49.399747 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.400388 kubelet[2985]: I0304 01:05:49.399772 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.403048 kubelet[2985]: E0304 01:05:49.403005 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8ef68d175b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.429415 kubelet[2985]: I0304 01:05:49.429330 2985 apiserver.go:52] "Watching apiserver" Mar 4 01:05:49.443819 kubelet[2985]: I0304 01:05:49.443760 2985 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 01:05:49.521388 kubelet[2985]: I0304 01:05:49.521282 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.524449 kubelet[2985]: I0304 01:05:49.524352 2985 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.525051 kubelet[2985]: E0304 01:05:49.524779 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8ef68d175b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:49.526523 kubelet[2985]: E0304 01:05:49.526356 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:50.521964 kubelet[2985]: I0304 01:05:50.521927 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:50.540688 kubelet[2985]: I0304 01:05:50.540315 2985 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:51.244679 kubelet[2985]: I0304 01:05:51.244648 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:51.258384 kubelet[2985]: I0304 01:05:51.257885 2985 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:51.859288 systemd[1]: Reloading requested from client PID 3271 ('systemctl') (unit session-9.scope)... Mar 4 01:05:51.859304 systemd[1]: Reloading... Mar 4 01:05:51.946423 zram_generator::config[3332]: No configuration found. 
Mar 4 01:05:52.035338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:05:52.117861 systemd[1]: Reloading finished in 258 ms. Mar 4 01:05:52.152811 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:05:52.170320 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:05:52.170670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:05:52.176052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:05:52.282791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:05:52.296774 (kubelet)[3385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:05:52.377683 kubelet[3385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:05:52.377683 kubelet[3385]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:05:52.377683 kubelet[3385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 4 01:05:52.378407 kubelet[3385]: I0304 01:05:52.377852 3385 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:05:52.386260 kubelet[3385]: I0304 01:05:52.385608 3385 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 4 01:05:52.386260 kubelet[3385]: I0304 01:05:52.385630 3385 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:05:52.386260 kubelet[3385]: I0304 01:05:52.385818 3385 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:05:52.387585 kubelet[3385]: I0304 01:05:52.387273 3385 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 01:05:52.391378 kubelet[3385]: I0304 01:05:52.391339 3385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:05:52.394546 kubelet[3385]: E0304 01:05:52.394521 3385 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:05:52.394546 kubelet[3385]: I0304 01:05:52.394546 3385 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 4 01:05:52.399668 kubelet[3385]: I0304 01:05:52.399641 3385 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 4 01:05:52.402581 kubelet[3385]: I0304 01:05:52.400170 3385 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:05:52.402581 kubelet[3385]: I0304 01:05:52.400198 3385 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8ef68d175b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 4 01:05:52.402581 kubelet[3385]: I0304 01:05:52.400541 3385 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 
01:05:52.402581 kubelet[3385]: I0304 01:05:52.400551 3385 container_manager_linux.go:303] "Creating device plugin manager" Mar 4 01:05:52.402581 kubelet[3385]: I0304 01:05:52.400599 3385 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:05:52.403672 kubelet[3385]: I0304 01:05:52.400731 3385 kubelet.go:480] "Attempting to sync node with API server" Mar 4 01:05:52.403672 kubelet[3385]: I0304 01:05:52.400743 3385 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:05:52.403672 kubelet[3385]: I0304 01:05:52.400770 3385 kubelet.go:386] "Adding apiserver pod source" Mar 4 01:05:52.403672 kubelet[3385]: I0304 01:05:52.400780 3385 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:05:52.408586 kubelet[3385]: I0304 01:05:52.408565 3385 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:05:52.409313 kubelet[3385]: I0304 01:05:52.409299 3385 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:05:52.420029 kubelet[3385]: I0304 01:05:52.419050 3385 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 4 01:05:52.420029 kubelet[3385]: I0304 01:05:52.419088 3385 server.go:1289] "Started kubelet" Mar 4 01:05:52.428298 kubelet[3385]: I0304 01:05:52.428278 3385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:05:52.434283 kubelet[3385]: E0304 01:05:52.434260 3385 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:05:52.434538 kubelet[3385]: I0304 01:05:52.434517 3385 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:05:52.435307 kubelet[3385]: I0304 01:05:52.435290 3385 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:05:52.438163 kubelet[3385]: I0304 01:05:52.438117 3385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:05:52.438419 kubelet[3385]: I0304 01:05:52.438402 3385 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:05:52.438664 kubelet[3385]: I0304 01:05:52.438647 3385 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:05:52.440337 kubelet[3385]: I0304 01:05:52.440323 3385 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 4 01:05:52.440682 kubelet[3385]: I0304 01:05:52.440667 3385 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 4 01:05:52.440904 kubelet[3385]: I0304 01:05:52.440894 3385 reconciler.go:26] "Reconciler: start to sync state" Mar 4 01:05:52.441659 kubelet[3385]: I0304 01:05:52.441640 3385 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:05:52.442678 kubelet[3385]: I0304 01:05:52.441858 3385 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:05:52.444632 kubelet[3385]: I0304 01:05:52.444618 3385 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:05:52.445882 kubelet[3385]: I0304 01:05:52.445846 3385 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 4 01:05:52.446678 kubelet[3385]: I0304 01:05:52.446657 3385 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 4 01:05:52.446735 kubelet[3385]: I0304 01:05:52.446684 3385 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 4 01:05:52.446735 kubelet[3385]: I0304 01:05:52.446705 3385 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:05:52.446735 kubelet[3385]: I0304 01:05:52.446712 3385 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 01:05:52.446797 kubelet[3385]: E0304 01:05:52.446751 3385 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:05:52.502111 kubelet[3385]: I0304 01:05:52.502089 3385 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:05:52.502236 kubelet[3385]: I0304 01:05:52.502225 3385 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:05:52.502307 kubelet[3385]: I0304 01:05:52.502300 3385 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:05:52.502528 kubelet[3385]: I0304 01:05:52.502515 3385 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 01:05:52.502608 kubelet[3385]: I0304 01:05:52.502588 3385 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 01:05:52.502656 kubelet[3385]: I0304 01:05:52.502650 3385 policy_none.go:49] "None policy: Start" Mar 4 01:05:52.502706 kubelet[3385]: I0304 01:05:52.502698 3385 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 01:05:52.502766 kubelet[3385]: I0304 01:05:52.502758 3385 state_mem.go:35] "Initializing new in-memory state store" Mar 4 01:05:52.502899 kubelet[3385]: I0304 01:05:52.502890 3385 state_mem.go:75] "Updated machine memory state" Mar 4 01:05:52.504062 kubelet[3385]: E0304 01:05:52.504047 3385 manager.go:517] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:05:52.504321 kubelet[3385]: I0304 01:05:52.504298 3385 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:05:52.504483 kubelet[3385]: I0304 01:05:52.504456 3385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:05:52.505053 kubelet[3385]: I0304 01:05:52.505035 3385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:05:52.506846 kubelet[3385]: E0304 01:05:52.506233 3385 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 01:05:52.547799 kubelet[3385]: I0304 01:05:52.547759 3385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.548053 kubelet[3385]: I0304 01:05:52.548040 3385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.548401 kubelet[3385]: I0304 01:05:52.548379 3385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.556620 kubelet[3385]: I0304 01:05:52.556601 3385 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:52.561284 kubelet[3385]: I0304 01:05:52.561265 3385 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:52.561427 kubelet[3385]: E0304 01:05:52.561413 3385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" 
Mar 4 01:05:52.561534 kubelet[3385]: I0304 01:05:52.561270 3385 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:52.561616 kubelet[3385]: E0304 01:05:52.561606 3385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8ef68d175b\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.610760 kubelet[3385]: I0304 01:05:52.610736 3385 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.625233 kubelet[3385]: I0304 01:05:52.624942 3385 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.625233 kubelet[3385]: I0304 01:05:52.625024 3385 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.642889 kubelet[3385]: I0304 01:05:52.642254 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.642889 kubelet[3385]: I0304 01:05:52.642287 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.642889 kubelet[3385]: I0304 01:05:52.642304 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.642889 kubelet[3385]: I0304 01:05:52.642319 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.642889 kubelet[3385]: I0304 01:05:52.642336 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbec19bb87e5d6d622ce572431c20c0b-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" (UID: \"cbec19bb87e5d6d622ce572431c20c0b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.643079 kubelet[3385]: I0304 01:05:52.642351 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbec19bb87e5d6d622ce572431c20c0b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" (UID: \"cbec19bb87e5d6d622ce572431c20c0b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.643079 kubelet[3385]: I0304 01:05:52.642378 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef7bdc996681c20bdf65f27b6b2fdf5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" (UID: \"3ef7bdc996681c20bdf65f27b6b2fdf5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 
01:05:52.643079 kubelet[3385]: I0304 01:05:52.642393 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d0759ab0d573dfe64153113064148f9-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8ef68d175b\" (UID: \"5d0759ab0d573dfe64153113064148f9\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:52.643079 kubelet[3385]: I0304 01:05:52.642410 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbec19bb87e5d6d622ce572431c20c0b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8ef68d175b\" (UID: \"cbec19bb87e5d6d622ce572431c20c0b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:53.404244 kubelet[3385]: I0304 01:05:53.403221 3385 apiserver.go:52] "Watching apiserver" Mar 4 01:05:53.441386 kubelet[3385]: I0304 01:05:53.441285 3385 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 01:05:53.485371 kubelet[3385]: I0304 01:05:53.485177 3385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:53.485725 kubelet[3385]: I0304 01:05:53.485711 3385 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:53.497874 kubelet[3385]: I0304 01:05:53.497621 3385 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:53.497874 kubelet[3385]: E0304 01:05:53.497669 3385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8ef68d175b\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:53.500649 kubelet[3385]: I0304 01:05:53.500629 3385 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:05:53.500725 kubelet[3385]: E0304 01:05:53.500671 3385 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8ef68d175b\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" Mar 4 01:05:53.518353 kubelet[3385]: I0304 01:05:53.517791 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8ef68d175b" podStartSLOduration=2.517774074 podStartE2EDuration="2.517774074s" podCreationTimestamp="2026-03-04 01:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:05:53.507843912 +0000 UTC m=+1.206133725" watchObservedRunningTime="2026-03-04 01:05:53.517774074 +0000 UTC m=+1.216063887" Mar 4 01:05:53.529649 kubelet[3385]: I0304 01:05:53.529277 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8ef68d175b" podStartSLOduration=3.529259638 podStartE2EDuration="3.529259638s" podCreationTimestamp="2026-03-04 01:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:05:53.529121678 +0000 UTC m=+1.227411491" watchObservedRunningTime="2026-03-04 01:05:53.529259638 +0000 UTC m=+1.227549451" Mar 4 01:05:53.529649 kubelet[3385]: I0304 01:05:53.529404 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8ef68d175b" podStartSLOduration=1.5293979979999999 podStartE2EDuration="1.529397998s" podCreationTimestamp="2026-03-04 01:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-04 01:05:53.517989355 +0000 UTC m=+1.216279128" watchObservedRunningTime="2026-03-04 01:05:53.529397998 +0000 UTC m=+1.227687811" Mar 4 01:05:58.278805 kubelet[3385]: I0304 01:05:58.278778 3385 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:05:58.279511 containerd[1824]: time="2026-03-04T01:05:58.279475148Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 01:05:58.279895 kubelet[3385]: I0304 01:05:58.279652 3385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:05:59.272980 kubelet[3385]: I0304 01:05:59.272945 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fffff29-71a6-4e4f-8fc8-3fb108781260-xtables-lock\") pod \"kube-proxy-5pxtf\" (UID: \"5fffff29-71a6-4e4f-8fc8-3fb108781260\") " pod="kube-system/kube-proxy-5pxtf" Mar 4 01:05:59.273175 kubelet[3385]: I0304 01:05:59.272994 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt82x\" (UniqueName: \"kubernetes.io/projected/5fffff29-71a6-4e4f-8fc8-3fb108781260-kube-api-access-vt82x\") pod \"kube-proxy-5pxtf\" (UID: \"5fffff29-71a6-4e4f-8fc8-3fb108781260\") " pod="kube-system/kube-proxy-5pxtf" Mar 4 01:05:59.273175 kubelet[3385]: I0304 01:05:59.273018 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5fffff29-71a6-4e4f-8fc8-3fb108781260-kube-proxy\") pod \"kube-proxy-5pxtf\" (UID: \"5fffff29-71a6-4e4f-8fc8-3fb108781260\") " pod="kube-system/kube-proxy-5pxtf" Mar 4 01:05:59.273175 kubelet[3385]: I0304 01:05:59.273036 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5fffff29-71a6-4e4f-8fc8-3fb108781260-lib-modules\") pod \"kube-proxy-5pxtf\" (UID: \"5fffff29-71a6-4e4f-8fc8-3fb108781260\") " pod="kube-system/kube-proxy-5pxtf" Mar 4 01:05:59.374094 kubelet[3385]: I0304 01:05:59.374053 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b783c7a5-3810-4dca-9fe1-90d1078787e9-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-tlhwp\" (UID: \"b783c7a5-3810-4dca-9fe1-90d1078787e9\") " pod="tigera-operator/tigera-operator-6bf85f8dd-tlhwp" Mar 4 01:05:59.374094 kubelet[3385]: I0304 01:05:59.374092 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv4j\" (UniqueName: \"kubernetes.io/projected/b783c7a5-3810-4dca-9fe1-90d1078787e9-kube-api-access-7dv4j\") pod \"tigera-operator-6bf85f8dd-tlhwp\" (UID: \"b783c7a5-3810-4dca-9fe1-90d1078787e9\") " pod="tigera-operator/tigera-operator-6bf85f8dd-tlhwp" Mar 4 01:05:59.488010 containerd[1824]: time="2026-03-04T01:05:59.487633229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pxtf,Uid:5fffff29-71a6-4e4f-8fc8-3fb108781260,Namespace:kube-system,Attempt:0,}" Mar 4 01:05:59.532759 containerd[1824]: time="2026-03-04T01:05:59.532517747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:05:59.532759 containerd[1824]: time="2026-03-04T01:05:59.532604387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:05:59.532759 containerd[1824]: time="2026-03-04T01:05:59.532638907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:59.532991 containerd[1824]: time="2026-03-04T01:05:59.532944747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:59.569695 containerd[1824]: time="2026-03-04T01:05:59.569638545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pxtf,Uid:5fffff29-71a6-4e4f-8fc8-3fb108781260,Namespace:kube-system,Attempt:0,} returns sandbox id \"389389bedc72ca462eff21e1a76c0b8703542bd72c27cd4cec4b0aaea6e78482\"" Mar 4 01:05:59.579475 containerd[1824]: time="2026-03-04T01:05:59.579423864Z" level=info msg="CreateContainer within sandbox \"389389bedc72ca462eff21e1a76c0b8703542bd72c27cd4cec4b0aaea6e78482\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:05:59.614387 containerd[1824]: time="2026-03-04T01:05:59.614312502Z" level=info msg="CreateContainer within sandbox \"389389bedc72ca462eff21e1a76c0b8703542bd72c27cd4cec4b0aaea6e78482\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d18f8f879bcf0e7aa44e4412d6aab4a4af331f6c1973dd2a806bd577c33daa1\"" Mar 4 01:05:59.615584 containerd[1824]: time="2026-03-04T01:05:59.615507222Z" level=info msg="StartContainer for \"2d18f8f879bcf0e7aa44e4412d6aab4a4af331f6c1973dd2a806bd577c33daa1\"" Mar 4 01:05:59.659317 containerd[1824]: time="2026-03-04T01:05:59.658806180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-tlhwp,Uid:b783c7a5-3810-4dca-9fe1-90d1078787e9,Namespace:tigera-operator,Attempt:0,}" Mar 4 01:05:59.666500 containerd[1824]: time="2026-03-04T01:05:59.666330820Z" level=info msg="StartContainer for \"2d18f8f879bcf0e7aa44e4412d6aab4a4af331f6c1973dd2a806bd577c33daa1\" returns successfully" Mar 4 01:05:59.703371 containerd[1824]: time="2026-03-04T01:05:59.703255538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:05:59.703371 containerd[1824]: time="2026-03-04T01:05:59.703307018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:05:59.703371 containerd[1824]: time="2026-03-04T01:05:59.703327858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:59.704620 containerd[1824]: time="2026-03-04T01:05:59.704550818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:05:59.749222 containerd[1824]: time="2026-03-04T01:05:59.749187896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-tlhwp,Uid:b783c7a5-3810-4dca-9fe1-90d1078787e9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d47b705ee4bf3aa6d5623daedc40bbb23d8bd4da32b55f0671e2c6137edd87a9\"" Mar 4 01:05:59.754171 containerd[1824]: time="2026-03-04T01:05:59.754140055Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 4 01:06:00.387584 systemd[1]: run-containerd-runc-k8s.io-389389bedc72ca462eff21e1a76c0b8703542bd72c27cd4cec4b0aaea6e78482-runc.W5Ciyd.mount: Deactivated successfully. 
Mar 4 01:06:00.511593 kubelet[3385]: I0304 01:06:00.510615 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5pxtf" podStartSLOduration=1.5106006170000001 podStartE2EDuration="1.510600617s" podCreationTimestamp="2026-03-04 01:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:06:00.510449217 +0000 UTC m=+8.208739030" watchObservedRunningTime="2026-03-04 01:06:00.510600617 +0000 UTC m=+8.208890430" Mar 4 01:06:01.526804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663036558.mount: Deactivated successfully. Mar 4 01:06:02.518030 containerd[1824]: time="2026-03-04T01:06:02.517942609Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:02.521109 containerd[1824]: time="2026-03-04T01:06:02.521075209Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 4 01:06:02.524846 containerd[1824]: time="2026-03-04T01:06:02.524607449Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:02.529888 containerd[1824]: time="2026-03-04T01:06:02.529833889Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:02.532190 containerd[1824]: time="2026-03-04T01:06:02.532150289Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest 
\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.777636954s" Mar 4 01:06:02.532190 containerd[1824]: time="2026-03-04T01:06:02.532188689Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 4 01:06:02.540182 containerd[1824]: time="2026-03-04T01:06:02.540150328Z" level=info msg="CreateContainer within sandbox \"d47b705ee4bf3aa6d5623daedc40bbb23d8bd4da32b55f0671e2c6137edd87a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 4 01:06:02.560944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184341096.mount: Deactivated successfully. Mar 4 01:06:02.568433 containerd[1824]: time="2026-03-04T01:06:02.568356207Z" level=info msg="CreateContainer within sandbox \"d47b705ee4bf3aa6d5623daedc40bbb23d8bd4da32b55f0671e2c6137edd87a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf19e49e385b1a150204af3a3012c141558a2d18f62a1d264f5901dabd4a437e\"" Mar 4 01:06:02.571174 containerd[1824]: time="2026-03-04T01:06:02.569080567Z" level=info msg="StartContainer for \"bf19e49e385b1a150204af3a3012c141558a2d18f62a1d264f5901dabd4a437e\"" Mar 4 01:06:02.614646 containerd[1824]: time="2026-03-04T01:06:02.614605605Z" level=info msg="StartContainer for \"bf19e49e385b1a150204af3a3012c141558a2d18f62a1d264f5901dabd4a437e\" returns successfully" Mar 4 01:06:03.515938 kubelet[3385]: I0304 01:06:03.515884 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-tlhwp" podStartSLOduration=1.7354286559999998 podStartE2EDuration="4.51586641s" podCreationTimestamp="2026-03-04 01:05:59 +0000 UTC" firstStartedPulling="2026-03-04 01:05:59.752885735 +0000 UTC m=+7.451175548" lastFinishedPulling="2026-03-04 01:06:02.533323489 +0000 UTC m=+10.231613302" observedRunningTime="2026-03-04 
01:06:03.51570901 +0000 UTC m=+11.213998823" watchObservedRunningTime="2026-03-04 01:06:03.51586641 +0000 UTC m=+11.214156183" Mar 4 01:06:08.463522 sudo[2368]: pam_unix(sudo:session): session closed for user root Mar 4 01:06:08.537771 sshd[2364]: pam_unix(sshd:session): session closed for user core Mar 4 01:06:08.544747 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:57264.service: Deactivated successfully. Mar 4 01:06:08.544747 systemd-logind[1785]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:06:08.556729 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:06:08.561671 systemd-logind[1785]: Removed session 9. Mar 4 01:06:15.572892 kubelet[3385]: I0304 01:06:15.572855 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vk9l\" (UniqueName: \"kubernetes.io/projected/b59a6d96-4f85-4e64-beae-c6ff054083b8-kube-api-access-4vk9l\") pod \"calico-typha-68f795685f-mbgr2\" (UID: \"b59a6d96-4f85-4e64-beae-c6ff054083b8\") " pod="calico-system/calico-typha-68f795685f-mbgr2" Mar 4 01:06:15.572892 kubelet[3385]: I0304 01:06:15.572892 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b59a6d96-4f85-4e64-beae-c6ff054083b8-tigera-ca-bundle\") pod \"calico-typha-68f795685f-mbgr2\" (UID: \"b59a6d96-4f85-4e64-beae-c6ff054083b8\") " pod="calico-system/calico-typha-68f795685f-mbgr2" Mar 4 01:06:15.573308 kubelet[3385]: I0304 01:06:15.572911 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b59a6d96-4f85-4e64-beae-c6ff054083b8-typha-certs\") pod \"calico-typha-68f795685f-mbgr2\" (UID: \"b59a6d96-4f85-4e64-beae-c6ff054083b8\") " pod="calico-system/calico-typha-68f795685f-mbgr2" Mar 4 01:06:15.675561 kubelet[3385]: I0304 01:06:15.673626 3385 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-var-run-calico\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675561 kubelet[3385]: I0304 01:06:15.674444 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-cni-log-dir\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675561 kubelet[3385]: I0304 01:06:15.674551 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-lib-modules\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675561 kubelet[3385]: I0304 01:06:15.674569 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f97affa-a13f-4531-890e-e009f8f900a9-node-certs\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675561 kubelet[3385]: I0304 01:06:15.674589 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f97affa-a13f-4531-890e-e009f8f900a9-tigera-ca-bundle\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675830 kubelet[3385]: I0304 01:06:15.674610 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-flexvol-driver-host\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675830 kubelet[3385]: I0304 01:06:15.674642 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-nodeproc\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675830 kubelet[3385]: I0304 01:06:15.674656 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-policysync\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675830 kubelet[3385]: I0304 01:06:15.674670 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-sys-fs\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675830 kubelet[3385]: I0304 01:06:15.674682 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-var-lib-calico\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675963 kubelet[3385]: I0304 01:06:15.674715 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-xtables-lock\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675963 kubelet[3385]: I0304 01:06:15.674733 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-cni-bin-dir\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675963 kubelet[3385]: I0304 01:06:15.674788 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-cni-net-dir\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675963 kubelet[3385]: I0304 01:06:15.674821 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8f97affa-a13f-4531-890e-e009f8f900a9-bpffs\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.675963 kubelet[3385]: I0304 01:06:15.674836 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks5hr\" (UniqueName: \"kubernetes.io/projected/8f97affa-a13f-4531-890e-e009f8f900a9-kube-api-access-ks5hr\") pod \"calico-node-vrlcp\" (UID: \"8f97affa-a13f-4531-890e-e009f8f900a9\") " pod="calico-system/calico-node-vrlcp" Mar 4 01:06:15.742274 kubelet[3385]: E0304 01:06:15.742048 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:15.775881 kubelet[3385]: I0304 01:06:15.775844 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e8fe3b2-8956-4035-bad2-31607646ad57-kubelet-dir\") pod \"csi-node-driver-mzdds\" (UID: \"6e8fe3b2-8956-4035-bad2-31607646ad57\") " pod="calico-system/csi-node-driver-mzdds" Mar 4 01:06:15.776594 kubelet[3385]: I0304 01:06:15.775984 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2zj\" (UniqueName: \"kubernetes.io/projected/6e8fe3b2-8956-4035-bad2-31607646ad57-kube-api-access-xk2zj\") pod \"csi-node-driver-mzdds\" (UID: \"6e8fe3b2-8956-4035-bad2-31607646ad57\") " pod="calico-system/csi-node-driver-mzdds" Mar 4 01:06:15.776594 kubelet[3385]: I0304 01:06:15.776005 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e8fe3b2-8956-4035-bad2-31607646ad57-varrun\") pod \"csi-node-driver-mzdds\" (UID: \"6e8fe3b2-8956-4035-bad2-31607646ad57\") " pod="calico-system/csi-node-driver-mzdds" Mar 4 01:06:15.776594 kubelet[3385]: I0304 01:06:15.776022 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e8fe3b2-8956-4035-bad2-31607646ad57-registration-dir\") pod \"csi-node-driver-mzdds\" (UID: \"6e8fe3b2-8956-4035-bad2-31607646ad57\") " pod="calico-system/csi-node-driver-mzdds" Mar 4 01:06:15.776594 kubelet[3385]: I0304 01:06:15.776039 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e8fe3b2-8956-4035-bad2-31607646ad57-socket-dir\") pod \"csi-node-driver-mzdds\" (UID: 
\"6e8fe3b2-8956-4035-bad2-31607646ad57\") " pod="calico-system/csi-node-driver-mzdds" Mar 4 01:06:15.779945 kubelet[3385]: E0304 01:06:15.779862 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.780168 kubelet[3385]: W0304 01:06:15.780147 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.780490 kubelet[3385]: E0304 01:06:15.780474 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.780791 kubelet[3385]: E0304 01:06:15.780777 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.781088 kubelet[3385]: W0304 01:06:15.780997 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.781088 kubelet[3385]: E0304 01:06:15.781018 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.781385 kubelet[3385]: E0304 01:06:15.781350 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.781550 kubelet[3385]: W0304 01:06:15.781466 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.781550 kubelet[3385]: E0304 01:06:15.781483 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.784709 kubelet[3385]: E0304 01:06:15.784435 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.784871 kubelet[3385]: W0304 01:06:15.784806 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.784871 kubelet[3385]: E0304 01:06:15.784827 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.786523 kubelet[3385]: E0304 01:06:15.786211 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.786523 kubelet[3385]: W0304 01:06:15.786227 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.786523 kubelet[3385]: E0304 01:06:15.786241 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.787743 kubelet[3385]: E0304 01:06:15.787026 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.787743 kubelet[3385]: W0304 01:06:15.787045 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.787743 kubelet[3385]: E0304 01:06:15.787058 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.789205 kubelet[3385]: E0304 01:06:15.789092 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.789205 kubelet[3385]: W0304 01:06:15.789106 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.789205 kubelet[3385]: E0304 01:06:15.789120 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.790407 kubelet[3385]: E0304 01:06:15.790276 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.790407 kubelet[3385]: W0304 01:06:15.790290 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.790640 kubelet[3385]: E0304 01:06:15.790303 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.791574 kubelet[3385]: E0304 01:06:15.791548 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.791934 kubelet[3385]: W0304 01:06:15.791917 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.792677 kubelet[3385]: E0304 01:06:15.792504 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.794247 kubelet[3385]: E0304 01:06:15.794232 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.794329 kubelet[3385]: W0304 01:06:15.794318 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.794329 kubelet[3385]: E0304 01:06:15.794349 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.794704 kubelet[3385]: E0304 01:06:15.794691 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.794798 kubelet[3385]: W0304 01:06:15.794785 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.794869 kubelet[3385]: E0304 01:06:15.794854 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.796619 kubelet[3385]: E0304 01:06:15.796505 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.796619 kubelet[3385]: W0304 01:06:15.796519 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.796619 kubelet[3385]: E0304 01:06:15.796531 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.797457 kubelet[3385]: E0304 01:06:15.797437 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.797563 kubelet[3385]: W0304 01:06:15.797546 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.797635 kubelet[3385]: E0304 01:06:15.797569 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.797790 kubelet[3385]: E0304 01:06:15.797775 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.797828 kubelet[3385]: W0304 01:06:15.797790 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.797828 kubelet[3385]: E0304 01:06:15.797801 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.798035 kubelet[3385]: E0304 01:06:15.798022 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.798035 kubelet[3385]: W0304 01:06:15.798035 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.798112 kubelet[3385]: E0304 01:06:15.798047 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.798277 kubelet[3385]: E0304 01:06:15.798266 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.798277 kubelet[3385]: W0304 01:06:15.798276 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.798443 kubelet[3385]: E0304 01:06:15.798284 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.798522 kubelet[3385]: E0304 01:06:15.798510 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.798522 kubelet[3385]: W0304 01:06:15.798520 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.798667 kubelet[3385]: E0304 01:06:15.798528 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.798739 kubelet[3385]: E0304 01:06:15.798728 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.798739 kubelet[3385]: W0304 01:06:15.798737 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.798795 kubelet[3385]: E0304 01:06:15.798750 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.798990 kubelet[3385]: E0304 01:06:15.798976 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.798990 kubelet[3385]: W0304 01:06:15.798988 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.799145 kubelet[3385]: E0304 01:06:15.798998 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.799216 kubelet[3385]: E0304 01:06:15.799205 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.799216 kubelet[3385]: W0304 01:06:15.799215 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.799377 kubelet[3385]: E0304 01:06:15.799223 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.799455 kubelet[3385]: E0304 01:06:15.799443 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.799455 kubelet[3385]: W0304 01:06:15.799453 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.799603 kubelet[3385]: E0304 01:06:15.799461 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.799721 kubelet[3385]: E0304 01:06:15.799710 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.799721 kubelet[3385]: W0304 01:06:15.799719 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.799887 kubelet[3385]: E0304 01:06:15.799728 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.800833 kubelet[3385]: E0304 01:06:15.800767 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.800833 kubelet[3385]: W0304 01:06:15.800781 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.800833 kubelet[3385]: E0304 01:06:15.800796 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.801831 kubelet[3385]: E0304 01:06:15.801811 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.801831 kubelet[3385]: W0304 01:06:15.801829 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.801917 kubelet[3385]: E0304 01:06:15.801842 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.811720 kubelet[3385]: E0304 01:06:15.811621 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.811720 kubelet[3385]: W0304 01:06:15.811714 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.811870 kubelet[3385]: E0304 01:06:15.811749 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.852793 containerd[1824]: time="2026-03-04T01:06:15.852687160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68f795685f-mbgr2,Uid:b59a6d96-4f85-4e64-beae-c6ff054083b8,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:15.877384 kubelet[3385]: E0304 01:06:15.877233 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.877384 kubelet[3385]: W0304 01:06:15.877252 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.877384 kubelet[3385]: E0304 01:06:15.877271 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.877551 kubelet[3385]: E0304 01:06:15.877508 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.877551 kubelet[3385]: W0304 01:06:15.877517 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.877551 kubelet[3385]: E0304 01:06:15.877527 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.877766 kubelet[3385]: E0304 01:06:15.877753 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.877766 kubelet[3385]: W0304 01:06:15.877765 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.877829 kubelet[3385]: E0304 01:06:15.877775 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.878386 kubelet[3385]: E0304 01:06:15.878349 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.878386 kubelet[3385]: W0304 01:06:15.878375 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.878386 kubelet[3385]: E0304 01:06:15.878386 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.881288 kubelet[3385]: E0304 01:06:15.881266 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.881288 kubelet[3385]: W0304 01:06:15.881282 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.881288 kubelet[3385]: E0304 01:06:15.881293 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:15.881512 kubelet[3385]: E0304 01:06:15.881485 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.881512 kubelet[3385]: W0304 01:06:15.881495 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.881512 kubelet[3385]: E0304 01:06:15.881504 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:15.881708 kubelet[3385]: E0304 01:06:15.881695 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:15.881708 kubelet[3385]: W0304 01:06:15.881705 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:15.881770 kubelet[3385]: E0304 01:06:15.881715 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 4 01:06:15.883107 kubelet[3385]: E0304 01:06:15.883084 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:06:15.883107 kubelet[3385]: W0304 01:06:15.883104 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:06:15.883107 kubelet[3385]: E0304 01:06:15.883115 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:06:15.896381 containerd[1824]: time="2026-03-04T01:06:15.895512724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:06:15.896381 containerd[1824]: time="2026-03-04T01:06:15.895564124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:06:15.896381 containerd[1824]: time="2026-03-04T01:06:15.895579324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:06:15.896381 containerd[1824]: time="2026-03-04T01:06:15.895651804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:06:15.901371 kubelet[3385]: E0304 01:06:15.901339 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:06:15.901457 kubelet[3385]: W0304 01:06:15.901379 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:06:15.901457 kubelet[3385]: E0304 01:06:15.901400 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:06:15.933559 containerd[1824]: time="2026-03-04T01:06:15.933522647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrlcp,Uid:8f97affa-a13f-4531-890e-e009f8f900a9,Namespace:calico-system,Attempt:0,}"
Mar 4 01:06:15.944078 containerd[1824]: time="2026-03-04T01:06:15.944043448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68f795685f-mbgr2,Uid:b59a6d96-4f85-4e64-beae-c6ff054083b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f374bec55903295b4fe2c23f79a2c95dd5f6c79deb82162bfa48d517be2fa484\""
Mar 4 01:06:15.945847 containerd[1824]: time="2026-03-04T01:06:15.945823769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 4 01:06:15.982113 containerd[1824]: time="2026-03-04T01:06:15.982029172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:06:15.982277 containerd[1824]: time="2026-03-04T01:06:15.982092132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:06:15.982343 containerd[1824]: time="2026-03-04T01:06:15.982270812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:06:15.982953 containerd[1824]: time="2026-03-04T01:06:15.982908492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:06:16.011268 containerd[1824]: time="2026-03-04T01:06:16.011234654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrlcp,Uid:8f97affa-a13f-4531-890e-e009f8f900a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\""
Mar 4 01:06:17.448059 kubelet[3385]: E0304 01:06:17.448008 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57"
Mar 4 01:06:17.471979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747084570.mount: Deactivated successfully.
Mar 4 01:06:18.128131 containerd[1824]: time="2026-03-04T01:06:18.127425410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:06:18.131154 containerd[1824]: time="2026-03-04T01:06:18.131127890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 4 01:06:18.134192 containerd[1824]: time="2026-03-04T01:06:18.134167050Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:06:18.139111 containerd[1824]: time="2026-03-04T01:06:18.139082811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:06:18.139947 containerd[1824]: time="2026-03-04T01:06:18.139914851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.193965802s"
Mar 4 01:06:18.140309 containerd[1824]: time="2026-03-04T01:06:18.139949971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 4 01:06:18.141987 containerd[1824]: time="2026-03-04T01:06:18.141791971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 4 01:06:18.157219 containerd[1824]: time="2026-03-04T01:06:18.157181053Z" level=info msg="CreateContainer within sandbox \"f374bec55903295b4fe2c23f79a2c95dd5f6c79deb82162bfa48d517be2fa484\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 4 01:06:18.189647 containerd[1824]: time="2026-03-04T01:06:18.189580696Z" level=info msg="CreateContainer within sandbox \"f374bec55903295b4fe2c23f79a2c95dd5f6c79deb82162bfa48d517be2fa484\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ebe7654e1cf48a51bd3153ae6db26e7cc517bb7b8aa9cb8eceb515fbaeff452a\""
Mar 4 01:06:18.190579 containerd[1824]: time="2026-03-04T01:06:18.190376176Z" level=info msg="StartContainer for \"ebe7654e1cf48a51bd3153ae6db26e7cc517bb7b8aa9cb8eceb515fbaeff452a\""
Mar 4 01:06:18.252278 containerd[1824]: time="2026-03-04T01:06:18.252238582Z" level=info msg="StartContainer for \"ebe7654e1cf48a51bd3153ae6db26e7cc517bb7b8aa9cb8eceb515fbaeff452a\" returns successfully"
Mar 4 01:06:18.570283 kubelet[3385]: I0304 01:06:18.569675 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68f795685f-mbgr2" podStartSLOduration=1.374009091 podStartE2EDuration="3.569650054s" podCreationTimestamp="2026-03-04 01:06:15 +0000 UTC" firstStartedPulling="2026-03-04 01:06:15.945494168 +0000 UTC m=+23.643783941" lastFinishedPulling="2026-03-04 01:06:18.141135091 +0000 UTC m=+25.839424904" observedRunningTime="2026-03-04 01:06:18.569627694 +0000 UTC m=+26.267917467" watchObservedRunningTime="2026-03-04 01:06:18.569650054 +0000 UTC m=+26.267939867"
Mar 4 01:06:18.582192 kubelet[3385]: E0304 01:06:18.582078 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 4 01:06:18.582192 kubelet[3385]: W0304 01:06:18.582099 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 4 01:06:18.582192 kubelet[3385]: E0304 01:06:18.582118 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 4 01:06:18.609126 kubelet[3385]: E0304 01:06:18.609081 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:18.609256 kubelet[3385]: E0304 01:06:18.609244 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:18.609256 kubelet[3385]: W0304 01:06:18.609254 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:18.609321 kubelet[3385]: E0304 01:06:18.609262 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:18.609574 kubelet[3385]: E0304 01:06:18.609560 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:18.609574 kubelet[3385]: W0304 01:06:18.609572 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:18.609639 kubelet[3385]: E0304 01:06:18.609580 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:18.609774 kubelet[3385]: E0304 01:06:18.609763 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:18.609774 kubelet[3385]: W0304 01:06:18.609773 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:18.609839 kubelet[3385]: E0304 01:06:18.609782 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:18.609937 kubelet[3385]: E0304 01:06:18.609927 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:18.609937 kubelet[3385]: W0304 01:06:18.609936 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:18.609994 kubelet[3385]: E0304 01:06:18.609944 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:18.610100 kubelet[3385]: E0304 01:06:18.610090 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:18.610100 kubelet[3385]: W0304 01:06:18.610099 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:18.610159 kubelet[3385]: E0304 01:06:18.610107 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:18.610618 kubelet[3385]: E0304 01:06:18.610604 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:18.610618 kubelet[3385]: W0304 01:06:18.610617 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:18.610686 kubelet[3385]: E0304 01:06:18.610626 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.448118 kubelet[3385]: E0304 01:06:19.448020 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:19.553666 kubelet[3385]: I0304 01:06:19.553636 3385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:06:19.568083 containerd[1824]: time="2026-03-04T01:06:19.568036553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:19.570834 containerd[1824]: time="2026-03-04T01:06:19.570621154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 4 01:06:19.573672 containerd[1824]: time="2026-03-04T01:06:19.573632834Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:19.578639 containerd[1824]: time="2026-03-04T01:06:19.578587074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:19.579945 containerd[1824]: time="2026-03-04T01:06:19.579456035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size 
\"5855167\" in 1.437631224s" Mar 4 01:06:19.579945 containerd[1824]: time="2026-03-04T01:06:19.579488835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 4 01:06:19.588309 containerd[1824]: time="2026-03-04T01:06:19.588243515Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 01:06:19.592449 kubelet[3385]: E0304 01:06:19.592355 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.592449 kubelet[3385]: W0304 01:06:19.592397 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.592449 kubelet[3385]: E0304 01:06:19.592415 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.593057 kubelet[3385]: E0304 01:06:19.592964 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.593057 kubelet[3385]: W0304 01:06:19.592977 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.593057 kubelet[3385]: E0304 01:06:19.592988 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.593481 kubelet[3385]: E0304 01:06:19.593136 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.593481 kubelet[3385]: W0304 01:06:19.593144 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.593481 kubelet[3385]: E0304 01:06:19.593156 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.593481 kubelet[3385]: E0304 01:06:19.593308 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.593481 kubelet[3385]: W0304 01:06:19.593316 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.593481 kubelet[3385]: E0304 01:06:19.593325 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.593753 kubelet[3385]: E0304 01:06:19.593655 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.593753 kubelet[3385]: W0304 01:06:19.593668 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.593753 kubelet[3385]: E0304 01:06:19.593678 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.594011 kubelet[3385]: E0304 01:06:19.593838 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.594011 kubelet[3385]: W0304 01:06:19.593853 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.594011 kubelet[3385]: E0304 01:06:19.593863 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.594126 kubelet[3385]: E0304 01:06:19.594115 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.594252 kubelet[3385]: W0304 01:06:19.594168 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.594252 kubelet[3385]: E0304 01:06:19.594180 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.594534 kubelet[3385]: E0304 01:06:19.594431 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.594534 kubelet[3385]: W0304 01:06:19.594446 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.594534 kubelet[3385]: E0304 01:06:19.594456 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.594716 kubelet[3385]: E0304 01:06:19.594672 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.594716 kubelet[3385]: W0304 01:06:19.594683 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.594716 kubelet[3385]: E0304 01:06:19.594692 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.595022 kubelet[3385]: E0304 01:06:19.594937 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.595022 kubelet[3385]: W0304 01:06:19.594948 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.595022 kubelet[3385]: E0304 01:06:19.594961 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.595213 kubelet[3385]: E0304 01:06:19.595107 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.595213 kubelet[3385]: W0304 01:06:19.595115 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.595213 kubelet[3385]: E0304 01:06:19.595125 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.595535 kubelet[3385]: E0304 01:06:19.595432 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.595535 kubelet[3385]: W0304 01:06:19.595443 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.595535 kubelet[3385]: E0304 01:06:19.595452 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.595744 kubelet[3385]: E0304 01:06:19.595701 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.595744 kubelet[3385]: W0304 01:06:19.595712 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.595744 kubelet[3385]: E0304 01:06:19.595722 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.596070 kubelet[3385]: E0304 01:06:19.595976 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.596070 kubelet[3385]: W0304 01:06:19.595989 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.596070 kubelet[3385]: E0304 01:06:19.596002 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.596265 kubelet[3385]: E0304 01:06:19.596143 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.596265 kubelet[3385]: W0304 01:06:19.596151 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.596265 kubelet[3385]: E0304 01:06:19.596160 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.613558 kubelet[3385]: E0304 01:06:19.613535 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.613558 kubelet[3385]: W0304 01:06:19.613555 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.613728 kubelet[3385]: E0304 01:06:19.613571 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.613873 kubelet[3385]: E0304 01:06:19.613742 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.613873 kubelet[3385]: W0304 01:06:19.613750 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.613873 kubelet[3385]: E0304 01:06:19.613760 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.614051 kubelet[3385]: E0304 01:06:19.614036 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.614194 kubelet[3385]: W0304 01:06:19.614115 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.614194 kubelet[3385]: E0304 01:06:19.614132 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.614424 kubelet[3385]: E0304 01:06:19.614412 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.614596 kubelet[3385]: W0304 01:06:19.614501 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.614596 kubelet[3385]: E0304 01:06:19.614519 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.614741 kubelet[3385]: E0304 01:06:19.614731 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.614868 kubelet[3385]: W0304 01:06:19.614789 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.614868 kubelet[3385]: E0304 01:06:19.614803 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.615117 kubelet[3385]: E0304 01:06:19.615056 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.615117 kubelet[3385]: W0304 01:06:19.615068 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.615117 kubelet[3385]: E0304 01:06:19.615078 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.615550 kubelet[3385]: E0304 01:06:19.615457 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.615550 kubelet[3385]: W0304 01:06:19.615472 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.615550 kubelet[3385]: E0304 01:06:19.615485 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.617796 kubelet[3385]: E0304 01:06:19.617645 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.617796 kubelet[3385]: W0304 01:06:19.617660 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.617796 kubelet[3385]: E0304 01:06:19.617676 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.618071 kubelet[3385]: E0304 01:06:19.617955 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.618071 kubelet[3385]: W0304 01:06:19.617967 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.618071 kubelet[3385]: E0304 01:06:19.617977 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.619048 kubelet[3385]: E0304 01:06:19.618836 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.619048 kubelet[3385]: W0304 01:06:19.618850 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.619048 kubelet[3385]: E0304 01:06:19.618862 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619222 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621053 kubelet[3385]: W0304 01:06:19.619231 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619242 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619422 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621053 kubelet[3385]: W0304 01:06:19.619432 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619440 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619607 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621053 kubelet[3385]: W0304 01:06:19.619616 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619624 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.621053 kubelet[3385]: E0304 01:06:19.619844 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621321 kubelet[3385]: W0304 01:06:19.619856 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621321 kubelet[3385]: E0304 01:06:19.619867 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.621321 kubelet[3385]: E0304 01:06:19.620023 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621321 kubelet[3385]: W0304 01:06:19.620032 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621321 kubelet[3385]: E0304 01:06:19.620041 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.621321 kubelet[3385]: E0304 01:06:19.620201 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621321 kubelet[3385]: W0304 01:06:19.620210 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621321 kubelet[3385]: E0304 01:06:19.620220 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.621321 kubelet[3385]: E0304 01:06:19.620545 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621321 kubelet[3385]: W0304 01:06:19.620558 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621663 kubelet[3385]: E0304 01:06:19.620572 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 01:06:19.621663 kubelet[3385]: E0304 01:06:19.620898 3385 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 01:06:19.621663 kubelet[3385]: W0304 01:06:19.620918 3385 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 01:06:19.621663 kubelet[3385]: E0304 01:06:19.620928 3385 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 01:06:19.630313 containerd[1824]: time="2026-03-04T01:06:19.630198440Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fca7c7c533d16f3427ad5b8892372864f087ad774c76f466064ccde7d2c7b6fc\"" Mar 4 01:06:19.631166 containerd[1824]: time="2026-03-04T01:06:19.630899400Z" level=info msg="StartContainer for \"fca7c7c533d16f3427ad5b8892372864f087ad774c76f466064ccde7d2c7b6fc\"" Mar 4 01:06:19.684600 containerd[1824]: time="2026-03-04T01:06:19.684031685Z" level=info msg="StartContainer for \"fca7c7c533d16f3427ad5b8892372864f087ad774c76f466064ccde7d2c7b6fc\" returns successfully" Mar 4 01:06:19.706233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fca7c7c533d16f3427ad5b8892372864f087ad774c76f466064ccde7d2c7b6fc-rootfs.mount: Deactivated successfully. 
Mar 4 01:06:20.875413 containerd[1824]: time="2026-03-04T01:06:20.875330764Z" level=info msg="shim disconnected" id=fca7c7c533d16f3427ad5b8892372864f087ad774c76f466064ccde7d2c7b6fc namespace=k8s.io Mar 4 01:06:20.875413 containerd[1824]: time="2026-03-04T01:06:20.875436004Z" level=warning msg="cleaning up after shim disconnected" id=fca7c7c533d16f3427ad5b8892372864f087ad774c76f466064ccde7d2c7b6fc namespace=k8s.io Mar 4 01:06:20.875413 containerd[1824]: time="2026-03-04T01:06:20.875446524Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:06:21.448016 kubelet[3385]: E0304 01:06:21.447965 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:21.562395 containerd[1824]: time="2026-03-04T01:06:21.561966832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 01:06:22.511578 kubelet[3385]: I0304 01:06:22.511550 3385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:06:23.447292 kubelet[3385]: E0304 01:06:23.447249 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:25.447633 kubelet[3385]: E0304 01:06:25.447590 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:27.447021 
kubelet[3385]: E0304 01:06:27.446974 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:29.447445 kubelet[3385]: E0304 01:06:29.447388 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:29.906135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126166295.mount: Deactivated successfully. Mar 4 01:06:29.951170 containerd[1824]: time="2026-03-04T01:06:29.950705078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:29.960120 containerd[1824]: time="2026-03-04T01:06:29.959965797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 4 01:06:29.960120 containerd[1824]: time="2026-03-04T01:06:29.960078677Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:29.965390 containerd[1824]: time="2026-03-04T01:06:29.965294917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:29.966217 containerd[1824]: time="2026-03-04T01:06:29.965774036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 8.403770764s" Mar 4 01:06:29.966217 containerd[1824]: time="2026-03-04T01:06:29.965804076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 4 01:06:29.973537 containerd[1824]: time="2026-03-04T01:06:29.973436156Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 01:06:30.011558 containerd[1824]: time="2026-03-04T01:06:30.011345113Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"dd72814435a2c4133feb14cfc8ae2d62cbe60f2fb0d7b1dc6a9224d933038949\"" Mar 4 01:06:30.012021 containerd[1824]: time="2026-03-04T01:06:30.011995513Z" level=info msg="StartContainer for \"dd72814435a2c4133feb14cfc8ae2d62cbe60f2fb0d7b1dc6a9224d933038949\"" Mar 4 01:06:30.066855 containerd[1824]: time="2026-03-04T01:06:30.066758629Z" level=info msg="StartContainer for \"dd72814435a2c4133feb14cfc8ae2d62cbe60f2fb0d7b1dc6a9224d933038949\" returns successfully" Mar 4 01:06:30.904602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd72814435a2c4133feb14cfc8ae2d62cbe60f2fb0d7b1dc6a9224d933038949-rootfs.mount: Deactivated successfully. 
Mar 4 01:06:31.447550 kubelet[3385]: E0304 01:06:31.447505 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:31.727318 containerd[1824]: time="2026-03-04T01:06:31.726981155Z" level=info msg="shim disconnected" id=dd72814435a2c4133feb14cfc8ae2d62cbe60f2fb0d7b1dc6a9224d933038949 namespace=k8s.io Mar 4 01:06:31.727318 containerd[1824]: time="2026-03-04T01:06:31.727031435Z" level=warning msg="cleaning up after shim disconnected" id=dd72814435a2c4133feb14cfc8ae2d62cbe60f2fb0d7b1dc6a9224d933038949 namespace=k8s.io Mar 4 01:06:31.727318 containerd[1824]: time="2026-03-04T01:06:31.727039235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:06:32.583808 containerd[1824]: time="2026-03-04T01:06:32.582741615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 01:06:33.448059 kubelet[3385]: E0304 01:06:33.447923 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:35.447763 kubelet[3385]: E0304 01:06:35.447467 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:35.658622 containerd[1824]: time="2026-03-04T01:06:35.658572937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:35.661375 containerd[1824]: time="2026-03-04T01:06:35.661337336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 4 01:06:35.664339 containerd[1824]: time="2026-03-04T01:06:35.664288296Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:35.668671 containerd[1824]: time="2026-03-04T01:06:35.668625455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:35.669487 containerd[1824]: time="2026-03-04T01:06:35.669373615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.08657676s" Mar 4 01:06:35.669487 containerd[1824]: time="2026-03-04T01:06:35.669405015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 4 01:06:35.677777 containerd[1824]: time="2026-03-04T01:06:35.677743613Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 01:06:35.714588 containerd[1824]: time="2026-03-04T01:06:35.714471607Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"97bd6fc0af80ea5f9dc8034158172012410d952ac70bf8e862ec055d73e1cafc\"" Mar 4 01:06:35.715774 containerd[1824]: time="2026-03-04T01:06:35.715468526Z" level=info msg="StartContainer for \"97bd6fc0af80ea5f9dc8034158172012410d952ac70bf8e862ec055d73e1cafc\"" Mar 4 01:06:35.773317 containerd[1824]: time="2026-03-04T01:06:35.772422876Z" level=info msg="StartContainer for \"97bd6fc0af80ea5f9dc8034158172012410d952ac70bf8e862ec055d73e1cafc\" returns successfully" Mar 4 01:06:37.447921 kubelet[3385]: E0304 01:06:37.447839 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzdds" podUID="6e8fe3b2-8956-4035-bad2-31607646ad57" Mar 4 01:06:37.831033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97bd6fc0af80ea5f9dc8034158172012410d952ac70bf8e862ec055d73e1cafc-rootfs.mount: Deactivated successfully. 
Mar 4 01:06:37.848866 containerd[1824]: time="2026-03-04T01:06:37.848734101Z" level=info msg="shim disconnected" id=97bd6fc0af80ea5f9dc8034158172012410d952ac70bf8e862ec055d73e1cafc namespace=k8s.io Mar 4 01:06:37.849482 containerd[1824]: time="2026-03-04T01:06:37.849289901Z" level=warning msg="cleaning up after shim disconnected" id=97bd6fc0af80ea5f9dc8034158172012410d952ac70bf8e862ec055d73e1cafc namespace=k8s.io Mar 4 01:06:37.849482 containerd[1824]: time="2026-03-04T01:06:37.849311381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:06:37.878076 kubelet[3385]: I0304 01:06:37.878048 3385 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 4 01:06:38.032192 kubelet[3385]: I0304 01:06:38.031738 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a0493f89-de54-4276-8d41-63add615c949-whisker-backend-key-pair\") pod \"whisker-5d6b8d5546-vcjv4\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " pod="calico-system/whisker-5d6b8d5546-vcjv4" Mar 4 01:06:38.032192 kubelet[3385]: I0304 01:06:38.031777 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-whisker-ca-bundle\") pod \"whisker-5d6b8d5546-vcjv4\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " pod="calico-system/whisker-5d6b8d5546-vcjv4" Mar 4 01:06:38.032192 kubelet[3385]: I0304 01:06:38.031794 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlnz\" (UniqueName: \"kubernetes.io/projected/43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3-kube-api-access-swlnz\") pod \"calico-apiserver-7c58d797df-nrmb5\" (UID: \"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3\") " pod="calico-system/calico-apiserver-7c58d797df-nrmb5" Mar 4 01:06:38.032192 kubelet[3385]: I0304 
01:06:38.031810 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ea0036f7-27e7-42ae-8dc8-5a1007c59806-goldmane-key-pair\") pod \"goldmane-5b85766d88-q7bfd\" (UID: \"ea0036f7-27e7-42ae-8dc8-5a1007c59806\") " pod="calico-system/goldmane-5b85766d88-q7bfd" Mar 4 01:06:38.032192 kubelet[3385]: I0304 01:06:38.031829 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xkh\" (UniqueName: \"kubernetes.io/projected/ea0036f7-27e7-42ae-8dc8-5a1007c59806-kube-api-access-h4xkh\") pod \"goldmane-5b85766d88-q7bfd\" (UID: \"ea0036f7-27e7-42ae-8dc8-5a1007c59806\") " pod="calico-system/goldmane-5b85766d88-q7bfd" Mar 4 01:06:38.032483 kubelet[3385]: I0304 01:06:38.031848 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-nginx-config\") pod \"whisker-5d6b8d5546-vcjv4\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " pod="calico-system/whisker-5d6b8d5546-vcjv4" Mar 4 01:06:38.032483 kubelet[3385]: I0304 01:06:38.031864 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87x77\" (UniqueName: \"kubernetes.io/projected/f7716632-24cc-4b2d-9197-7d4214b114df-kube-api-access-87x77\") pod \"calico-apiserver-7c58d797df-d2knn\" (UID: \"f7716632-24cc-4b2d-9197-7d4214b114df\") " pod="calico-system/calico-apiserver-7c58d797df-d2knn" Mar 4 01:06:38.032483 kubelet[3385]: I0304 01:06:38.031882 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtqh\" (UniqueName: \"kubernetes.io/projected/a0493f89-de54-4276-8d41-63add615c949-kube-api-access-5rtqh\") pod \"whisker-5d6b8d5546-vcjv4\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " 
pod="calico-system/whisker-5d6b8d5546-vcjv4" Mar 4 01:06:38.032483 kubelet[3385]: I0304 01:06:38.031921 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3-calico-apiserver-certs\") pod \"calico-apiserver-7c58d797df-nrmb5\" (UID: \"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3\") " pod="calico-system/calico-apiserver-7c58d797df-nrmb5" Mar 4 01:06:38.032483 kubelet[3385]: I0304 01:06:38.031937 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn9c6\" (UniqueName: \"kubernetes.io/projected/3d654472-1f4e-4e23-8263-e9fe218626cc-kube-api-access-pn9c6\") pod \"coredns-674b8bbfcf-t9nxp\" (UID: \"3d654472-1f4e-4e23-8263-e9fe218626cc\") " pod="kube-system/coredns-674b8bbfcf-t9nxp" Mar 4 01:06:38.032594 kubelet[3385]: I0304 01:06:38.031955 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4w9\" (UniqueName: \"kubernetes.io/projected/381e9cc1-21d8-4ab6-bdd5-52a4f34253d4-kube-api-access-dc4w9\") pod \"coredns-674b8bbfcf-dz2n2\" (UID: \"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4\") " pod="kube-system/coredns-674b8bbfcf-dz2n2" Mar 4 01:06:38.032594 kubelet[3385]: I0304 01:06:38.031969 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84c4\" (UniqueName: \"kubernetes.io/projected/d9e5a6df-2938-4880-baa9-bc2b5f03ecb3-kube-api-access-h84c4\") pod \"calico-kube-controllers-6594fc5f87-7lc2r\" (UID: \"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3\") " pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" Mar 4 01:06:38.032594 kubelet[3385]: I0304 01:06:38.031991 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ea0036f7-27e7-42ae-8dc8-5a1007c59806-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-q7bfd\" (UID: \"ea0036f7-27e7-42ae-8dc8-5a1007c59806\") " pod="calico-system/goldmane-5b85766d88-q7bfd" Mar 4 01:06:38.032594 kubelet[3385]: I0304 01:06:38.032007 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f7716632-24cc-4b2d-9197-7d4214b114df-calico-apiserver-certs\") pod \"calico-apiserver-7c58d797df-d2knn\" (UID: \"f7716632-24cc-4b2d-9197-7d4214b114df\") " pod="calico-system/calico-apiserver-7c58d797df-d2knn" Mar 4 01:06:38.032594 kubelet[3385]: I0304 01:06:38.032024 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/381e9cc1-21d8-4ab6-bdd5-52a4f34253d4-config-volume\") pod \"coredns-674b8bbfcf-dz2n2\" (UID: \"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4\") " pod="kube-system/coredns-674b8bbfcf-dz2n2" Mar 4 01:06:38.032700 kubelet[3385]: I0304 01:06:38.032038 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d654472-1f4e-4e23-8263-e9fe218626cc-config-volume\") pod \"coredns-674b8bbfcf-t9nxp\" (UID: \"3d654472-1f4e-4e23-8263-e9fe218626cc\") " pod="kube-system/coredns-674b8bbfcf-t9nxp" Mar 4 01:06:38.032700 kubelet[3385]: I0304 01:06:38.032055 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9e5a6df-2938-4880-baa9-bc2b5f03ecb3-tigera-ca-bundle\") pod \"calico-kube-controllers-6594fc5f87-7lc2r\" (UID: \"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3\") " pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" Mar 4 01:06:38.032700 kubelet[3385]: I0304 01:06:38.032069 3385 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea0036f7-27e7-42ae-8dc8-5a1007c59806-config\") pod \"goldmane-5b85766d88-q7bfd\" (UID: \"ea0036f7-27e7-42ae-8dc8-5a1007c59806\") " pod="calico-system/goldmane-5b85766d88-q7bfd" Mar 4 01:06:38.228217 containerd[1824]: time="2026-03-04T01:06:38.228116992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz2n2,Uid:381e9cc1-21d8-4ab6-bdd5-52a4f34253d4,Namespace:kube-system,Attempt:0,}" Mar 4 01:06:38.240154 containerd[1824]: time="2026-03-04T01:06:38.239862470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d6b8d5546-vcjv4,Uid:a0493f89-de54-4276-8d41-63add615c949,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:38.249174 containerd[1824]: time="2026-03-04T01:06:38.249135468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9nxp,Uid:3d654472-1f4e-4e23-8263-e9fe218626cc,Namespace:kube-system,Attempt:0,}" Mar 4 01:06:38.261170 containerd[1824]: time="2026-03-04T01:06:38.260907746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6594fc5f87-7lc2r,Uid:d9e5a6df-2938-4880-baa9-bc2b5f03ecb3,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:38.264255 containerd[1824]: time="2026-03-04T01:06:38.264228306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q7bfd,Uid:ea0036f7-27e7-42ae-8dc8-5a1007c59806,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:38.268006 containerd[1824]: time="2026-03-04T01:06:38.267847865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-nrmb5,Uid:43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:38.268501 containerd[1824]: time="2026-03-04T01:06:38.268342185Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c58d797df-d2knn,Uid:f7716632-24cc-4b2d-9197-7d4214b114df,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:38.535475 containerd[1824]: time="2026-03-04T01:06:38.535065017Z" level=error msg="Failed to destroy network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.535618 containerd[1824]: time="2026-03-04T01:06:38.535526417Z" level=error msg="encountered an error cleaning up failed sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.536246 containerd[1824]: time="2026-03-04T01:06:38.535617737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz2n2,Uid:381e9cc1-21d8-4ab6-bdd5-52a4f34253d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.537714 kubelet[3385]: E0304 01:06:38.537236 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.537714 kubelet[3385]: E0304 
01:06:38.537315 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dz2n2" Mar 4 01:06:38.537714 kubelet[3385]: E0304 01:06:38.537347 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dz2n2" Mar 4 01:06:38.538184 kubelet[3385]: E0304 01:06:38.537425 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dz2n2_kube-system(381e9cc1-21d8-4ab6-bdd5-52a4f34253d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dz2n2_kube-system(381e9cc1-21d8-4ab6-bdd5-52a4f34253d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dz2n2" podUID="381e9cc1-21d8-4ab6-bdd5-52a4f34253d4" Mar 4 01:06:38.538899 containerd[1824]: time="2026-03-04T01:06:38.538870776Z" level=error msg="Failed to destroy network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.539425 containerd[1824]: time="2026-03-04T01:06:38.539313416Z" level=error msg="encountered an error cleaning up failed sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.540086 containerd[1824]: time="2026-03-04T01:06:38.540053976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9nxp,Uid:3d654472-1f4e-4e23-8263-e9fe218626cc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.540479 kubelet[3385]: E0304 01:06:38.540309 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.540479 kubelet[3385]: E0304 01:06:38.540353 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9nxp" Mar 4 01:06:38.540479 kubelet[3385]: E0304 01:06:38.540399 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9nxp" Mar 4 01:06:38.540622 kubelet[3385]: E0304 01:06:38.540435 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t9nxp_kube-system(3d654472-1f4e-4e23-8263-e9fe218626cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t9nxp_kube-system(3d654472-1f4e-4e23-8263-e9fe218626cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t9nxp" podUID="3d654472-1f4e-4e23-8263-e9fe218626cc" Mar 4 01:06:38.544996 containerd[1824]: time="2026-03-04T01:06:38.544864695Z" level=error msg="Failed to destroy network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.546848 containerd[1824]: time="2026-03-04T01:06:38.546634855Z" level=error msg="encountered an error cleaning up failed sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.546916 containerd[1824]: time="2026-03-04T01:06:38.546866135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d6b8d5546-vcjv4,Uid:a0493f89-de54-4276-8d41-63add615c949,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.547346 kubelet[3385]: E0304 01:06:38.547318 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.547590 kubelet[3385]: E0304 01:06:38.547473 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d6b8d5546-vcjv4" Mar 4 01:06:38.547590 kubelet[3385]: E0304 01:06:38.547499 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d6b8d5546-vcjv4" Mar 4 01:06:38.548370 kubelet[3385]: E0304 01:06:38.547719 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d6b8d5546-vcjv4_calico-system(a0493f89-de54-4276-8d41-63add615c949)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d6b8d5546-vcjv4_calico-system(a0493f89-de54-4276-8d41-63add615c949)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d6b8d5546-vcjv4" podUID="a0493f89-de54-4276-8d41-63add615c949" Mar 4 01:06:38.567472 containerd[1824]: time="2026-03-04T01:06:38.567429371Z" level=error msg="Failed to destroy network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.567914 containerd[1824]: time="2026-03-04T01:06:38.567888291Z" level=error msg="encountered an error cleaning up failed sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.569042 containerd[1824]: time="2026-03-04T01:06:38.568011251Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-q7bfd,Uid:ea0036f7-27e7-42ae-8dc8-5a1007c59806,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.569728 kubelet[3385]: E0304 01:06:38.569339 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.569728 kubelet[3385]: E0304 01:06:38.569414 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-q7bfd" Mar 4 01:06:38.569728 kubelet[3385]: E0304 01:06:38.569437 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-q7bfd" Mar 4 01:06:38.569885 kubelet[3385]: E0304 01:06:38.569489 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-5b85766d88-q7bfd_calico-system(ea0036f7-27e7-42ae-8dc8-5a1007c59806)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-q7bfd_calico-system(ea0036f7-27e7-42ae-8dc8-5a1007c59806)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-q7bfd" podUID="ea0036f7-27e7-42ae-8dc8-5a1007c59806" Mar 4 01:06:38.578252 containerd[1824]: time="2026-03-04T01:06:38.578203489Z" level=error msg="Failed to destroy network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.578532 containerd[1824]: time="2026-03-04T01:06:38.578505409Z" level=error msg="encountered an error cleaning up failed sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.578567 containerd[1824]: time="2026-03-04T01:06:38.578548849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-nrmb5,Uid:43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.578746 kubelet[3385]: E0304 01:06:38.578718 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.578984 kubelet[3385]: E0304 01:06:38.578863 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c58d797df-nrmb5" Mar 4 01:06:38.578984 kubelet[3385]: E0304 01:06:38.578887 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c58d797df-nrmb5" Mar 4 01:06:38.578984 kubelet[3385]: E0304 01:06:38.578951 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c58d797df-nrmb5_calico-system(43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c58d797df-nrmb5_calico-system(43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c58d797df-nrmb5" podUID="43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3" Mar 4 01:06:38.584918 containerd[1824]: time="2026-03-04T01:06:38.584531328Z" level=error msg="Failed to destroy network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.584918 containerd[1824]: time="2026-03-04T01:06:38.584792768Z" level=error msg="encountered an error cleaning up failed sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.584918 containerd[1824]: time="2026-03-04T01:06:38.584838728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6594fc5f87-7lc2r,Uid:d9e5a6df-2938-4880-baa9-bc2b5f03ecb3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.585377 kubelet[3385]: E0304 01:06:38.585145 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.585377 kubelet[3385]: E0304 01:06:38.585220 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" Mar 4 01:06:38.585377 kubelet[3385]: E0304 01:06:38.585251 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" Mar 4 01:06:38.585665 kubelet[3385]: E0304 01:06:38.585307 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6594fc5f87-7lc2r_calico-system(d9e5a6df-2938-4880-baa9-bc2b5f03ecb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6594fc5f87-7lc2r_calico-system(d9e5a6df-2938-4880-baa9-bc2b5f03ecb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" podUID="d9e5a6df-2938-4880-baa9-bc2b5f03ecb3" Mar 4 01:06:38.594135 containerd[1824]: time="2026-03-04T01:06:38.594082886Z" level=error msg="Failed to destroy network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.594422 containerd[1824]: time="2026-03-04T01:06:38.594398406Z" level=error msg="encountered an error cleaning up failed sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.594473 containerd[1824]: time="2026-03-04T01:06:38.594450966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-d2knn,Uid:f7716632-24cc-4b2d-9197-7d4214b114df,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.594966 kubelet[3385]: E0304 01:06:38.594678 3385 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.594966 kubelet[3385]: E0304 
01:06:38.594721 3385 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c58d797df-d2knn" Mar 4 01:06:38.594966 kubelet[3385]: E0304 01:06:38.594738 3385 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c58d797df-d2knn" Mar 4 01:06:38.595131 kubelet[3385]: E0304 01:06:38.594775 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c58d797df-d2knn_calico-system(f7716632-24cc-4b2d-9197-7d4214b114df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c58d797df-d2knn_calico-system(f7716632-24cc-4b2d-9197-7d4214b114df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c58d797df-d2knn" podUID="f7716632-24cc-4b2d-9197-7d4214b114df" Mar 4 01:06:38.607434 kubelet[3385]: I0304 01:06:38.607374 3385 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:38.611900 containerd[1824]: time="2026-03-04T01:06:38.611034243Z" level=info msg="StopPodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\"" Mar 4 01:06:38.616707 kubelet[3385]: I0304 01:06:38.616627 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:38.617609 containerd[1824]: time="2026-03-04T01:06:38.617489002Z" level=info msg="StopPodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\"" Mar 4 01:06:38.618532 containerd[1824]: time="2026-03-04T01:06:38.618479402Z" level=info msg="Ensure that sandbox c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75 in task-service has been cleanup successfully" Mar 4 01:06:38.621328 containerd[1824]: time="2026-03-04T01:06:38.621245361Z" level=info msg="Ensure that sandbox 13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b in task-service has been cleanup successfully" Mar 4 01:06:38.637559 kubelet[3385]: I0304 01:06:38.637171 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:06:38.639654 containerd[1824]: time="2026-03-04T01:06:38.639225558Z" level=info msg="StopPodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\"" Mar 4 01:06:38.639654 containerd[1824]: time="2026-03-04T01:06:38.639457238Z" level=info msg="Ensure that sandbox 084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc in task-service has been cleanup successfully" Mar 4 01:06:38.642436 kubelet[3385]: I0304 01:06:38.642402 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:38.642982 containerd[1824]: 
time="2026-03-04T01:06:38.642955277Z" level=info msg="StopPodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\"" Mar 4 01:06:38.643728 containerd[1824]: time="2026-03-04T01:06:38.643698037Z" level=info msg="Ensure that sandbox 3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0 in task-service has been cleanup successfully" Mar 4 01:06:38.646425 containerd[1824]: time="2026-03-04T01:06:38.643539277Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 01:06:38.657560 kubelet[3385]: I0304 01:06:38.657535 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:06:38.661570 containerd[1824]: time="2026-03-04T01:06:38.661118154Z" level=info msg="StopPodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\"" Mar 4 01:06:38.661694 kubelet[3385]: I0304 01:06:38.661177 3385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:38.662116 containerd[1824]: time="2026-03-04T01:06:38.662084354Z" level=info msg="StopPodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\"" Mar 4 01:06:38.662419 containerd[1824]: time="2026-03-04T01:06:38.662185594Z" level=info msg="Ensure that sandbox df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1 in task-service has been cleanup successfully" Mar 4 01:06:38.662628 containerd[1824]: time="2026-03-04T01:06:38.662558594Z" level=info msg="Ensure that sandbox a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d in task-service has been cleanup successfully" Mar 4 01:06:38.663311 kubelet[3385]: I0304 01:06:38.663116 3385 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:06:38.665864 containerd[1824]: time="2026-03-04T01:06:38.665838433Z" level=info msg="StopPodSandbox for \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\"" Mar 4 01:06:38.666998 containerd[1824]: time="2026-03-04T01:06:38.666468153Z" level=info msg="Ensure that sandbox 0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12 in task-service has been cleanup successfully" Mar 4 01:06:38.718268 containerd[1824]: time="2026-03-04T01:06:38.718226304Z" level=info msg="CreateContainer within sandbox \"fa2420a7e42f47ecdb4a4d85c7b3bf914820d35b0ab2b134614f22ea117b5714\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7ab3e0c0cb103d7ae752fe678e8f716f3e24672c943c8123c5bcd221a50b751\"" Mar 4 01:06:38.719121 containerd[1824]: time="2026-03-04T01:06:38.719097343Z" level=info msg="StartContainer for \"b7ab3e0c0cb103d7ae752fe678e8f716f3e24672c943c8123c5bcd221a50b751\"" Mar 4 01:06:38.724292 containerd[1824]: time="2026-03-04T01:06:38.724253543Z" level=error msg="StopPodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" failed" error="failed to destroy network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.724715 kubelet[3385]: E0304 01:06:38.724466 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:38.724715 kubelet[3385]: E0304 01:06:38.724518 3385 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0"} Mar 4 01:06:38.724715 kubelet[3385]: E0304 01:06:38.724564 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.724715 kubelet[3385]: E0304 01:06:38.724586 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c58d797df-nrmb5" podUID="43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3" Mar 4 01:06:38.744783 containerd[1824]: time="2026-03-04T01:06:38.744400339Z" level=error msg="StopPodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" failed" error="failed to destroy network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 4 01:06:38.745873 kubelet[3385]: E0304 01:06:38.744638 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:06:38.745873 kubelet[3385]: E0304 01:06:38.744682 3385 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1"} Mar 4 01:06:38.745873 kubelet[3385]: E0304 01:06:38.744721 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea0036f7-27e7-42ae-8dc8-5a1007c59806\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.745873 kubelet[3385]: E0304 01:06:38.744742 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea0036f7-27e7-42ae-8dc8-5a1007c59806\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-q7bfd" podUID="ea0036f7-27e7-42ae-8dc8-5a1007c59806" Mar 4 01:06:38.752695 
containerd[1824]: time="2026-03-04T01:06:38.752651097Z" level=error msg="StopPodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" failed" error="failed to destroy network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.752882 kubelet[3385]: E0304 01:06:38.752847 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:38.752943 kubelet[3385]: E0304 01:06:38.752894 3385 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d"} Mar 4 01:06:38.752943 kubelet[3385]: E0304 01:06:38.752924 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.753028 kubelet[3385]: E0304 01:06:38.752946 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" podUID="d9e5a6df-2938-4880-baa9-bc2b5f03ecb3" Mar 4 01:06:38.757387 containerd[1824]: time="2026-03-04T01:06:38.756316737Z" level=error msg="StopPodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" failed" error="failed to destroy network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.757485 kubelet[3385]: E0304 01:06:38.756540 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:38.757485 kubelet[3385]: E0304 01:06:38.756577 3385 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75"} Mar 4 01:06:38.757485 kubelet[3385]: E0304 01:06:38.756613 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.757485 kubelet[3385]: E0304 01:06:38.756634 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dz2n2" podUID="381e9cc1-21d8-4ab6-bdd5-52a4f34253d4" Mar 4 01:06:38.758269 containerd[1824]: time="2026-03-04T01:06:38.758234656Z" level=error msg="StopPodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" failed" error="failed to destroy network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.758499 kubelet[3385]: E0304 01:06:38.758469 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:38.758575 kubelet[3385]: E0304 01:06:38.758505 3385 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b"} Mar 4 01:06:38.758575 kubelet[3385]: E0304 01:06:38.758528 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0493f89-de54-4276-8d41-63add615c949\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.758575 kubelet[3385]: E0304 01:06:38.758557 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0493f89-de54-4276-8d41-63add615c949\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d6b8d5546-vcjv4" podUID="a0493f89-de54-4276-8d41-63add615c949" Mar 4 01:06:38.759128 containerd[1824]: time="2026-03-04T01:06:38.758884536Z" level=error msg="StopPodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" failed" error="failed to destroy network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.759187 kubelet[3385]: E0304 01:06:38.759016 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:06:38.759187 kubelet[3385]: E0304 01:06:38.759041 3385 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc"} Mar 4 01:06:38.759187 kubelet[3385]: E0304 01:06:38.759061 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7716632-24cc-4b2d-9197-7d4214b114df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.759187 kubelet[3385]: E0304 01:06:38.759092 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7716632-24cc-4b2d-9197-7d4214b114df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c58d797df-d2knn" podUID="f7716632-24cc-4b2d-9197-7d4214b114df" Mar 4 01:06:38.761648 containerd[1824]: time="2026-03-04T01:06:38.761605536Z" level=error msg="StopPodSandbox for 
\"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" failed" error="failed to destroy network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 01:06:38.762480 kubelet[3385]: E0304 01:06:38.762452 3385 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:06:38.762595 kubelet[3385]: E0304 01:06:38.762580 3385 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12"} Mar 4 01:06:38.762678 kubelet[3385]: E0304 01:06:38.762666 3385 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d654472-1f4e-4e23-8263-e9fe218626cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 4 01:06:38.762803 kubelet[3385]: E0304 01:06:38.762786 3385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d654472-1f4e-4e23-8263-e9fe218626cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t9nxp" podUID="3d654472-1f4e-4e23-8263-e9fe218626cc" Mar 4 01:06:38.803386 containerd[1824]: time="2026-03-04T01:06:38.801287129Z" level=info msg="StartContainer for \"b7ab3e0c0cb103d7ae752fe678e8f716f3e24672c943c8123c5bcd221a50b751\" returns successfully" Mar 4 01:06:39.449666 containerd[1824]: time="2026-03-04T01:06:39.449568371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzdds,Uid:6e8fe3b2-8956-4035-bad2-31607646ad57,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:39.660822 systemd-networkd[1398]: cali2166a1afb44: Link UP Mar 4 01:06:39.661538 systemd-networkd[1398]: cali2166a1afb44: Gained carrier Mar 4 01:06:39.680925 containerd[1824]: time="2026-03-04T01:06:39.676680650Z" level=info msg="StopPodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\"" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.496 [ERROR][4596] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.514 [INFO][4596] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0 csi-node-driver- calico-system 6e8fe3b2-8956-4035-bad2-31607646ad57 709 0 2026-03-04 01:06:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b csi-node-driver-mzdds eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2166a1afb44 [] [] }} ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.514 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.537 [INFO][4608] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" HandleID="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Workload="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.546 [INFO][4608] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" HandleID="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Workload="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a9e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"csi-node-driver-mzdds", "timestamp":"2026-03-04 01:06:39.537650796 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000286c60)} Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.546 [INFO][4608] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.546 [INFO][4608] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.546 [INFO][4608] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.548 [INFO][4608] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.551 [INFO][4608] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.554 [INFO][4608] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.556 [INFO][4608] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.557 [INFO][4608] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.558 [INFO][4608] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.559 [INFO][4608] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448 Mar 
4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.567 [INFO][4608] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.573 [INFO][4608] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.65/26] block=192.168.36.64/26 handle="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.573 [INFO][4608] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.65/26] handle="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.573 [INFO][4608] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:39.692695 containerd[1824]: 2026-03-04 01:06:39.573 [INFO][4608] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.65/26] IPv6=[] ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" HandleID="k8s-pod-network.0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Workload="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.693755 containerd[1824]: 2026-03-04 01:06:39.576 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"6e8fe3b2-8956-4035-bad2-31607646ad57", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"csi-node-driver-mzdds", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2166a1afb44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:39.693755 containerd[1824]: 2026-03-04 01:06:39.576 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.65/32] ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.693755 containerd[1824]: 2026-03-04 01:06:39.576 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2166a1afb44 ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.693755 
containerd[1824]: 2026-03-04 01:06:39.661 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.693755 containerd[1824]: 2026-03-04 01:06:39.661 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e8fe3b2-8956-4035-bad2-31607646ad57", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448", Pod:"csi-node-driver-mzdds", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.36.65/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2166a1afb44", MAC:"9a:09:5a:13:f8:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:39.693755 containerd[1824]: 2026-03-04 01:06:39.683 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448" Namespace="calico-system" Pod="csi-node-driver-mzdds" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-csi--node--driver--mzdds-eth0" Mar 4 01:06:39.728943 containerd[1824]: time="2026-03-04T01:06:39.727888521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:39.728943 containerd[1824]: time="2026-03-04T01:06:39.728306481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:39.728943 containerd[1824]: time="2026-03-04T01:06:39.728353921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:39.728943 containerd[1824]: time="2026-03-04T01:06:39.728469121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:39.754504 systemd[1]: run-containerd-runc-k8s.io-0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448-runc.iH8hb0.mount: Deactivated successfully. 
Mar 4 01:06:39.775870 kubelet[3385]: I0304 01:06:39.775570 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vrlcp" podStartSLOduration=5.117765632 podStartE2EDuration="24.775551993s" podCreationTimestamp="2026-03-04 01:06:15 +0000 UTC" firstStartedPulling="2026-03-04 01:06:16.012522254 +0000 UTC m=+23.710812067" lastFinishedPulling="2026-03-04 01:06:35.670308655 +0000 UTC m=+43.368598428" observedRunningTime="2026-03-04 01:06:39.700553046 +0000 UTC m=+47.398842899" watchObservedRunningTime="2026-03-04 01:06:39.775551993 +0000 UTC m=+47.473841806" Mar 4 01:06:39.779904 containerd[1824]: time="2026-03-04T01:06:39.779577912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzdds,Uid:6e8fe3b2-8956-4035-bad2-31607646ad57,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448\"" Mar 4 01:06:39.781127 containerd[1824]: time="2026-03-04T01:06:39.781089032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.777 [INFO][4633] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.779 [INFO][4633] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" iface="eth0" netns="/var/run/netns/cni-df104d39-cadd-cdc7-25b1-e6ed649fcc31" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.779 [INFO][4633] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" iface="eth0" netns="/var/run/netns/cni-df104d39-cadd-cdc7-25b1-e6ed649fcc31" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.781 [INFO][4633] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" iface="eth0" netns="/var/run/netns/cni-df104d39-cadd-cdc7-25b1-e6ed649fcc31" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.781 [INFO][4633] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.781 [INFO][4633] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.804 [INFO][4681] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.804 [INFO][4681] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.804 [INFO][4681] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.814 [WARNING][4681] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.814 [INFO][4681] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.815 [INFO][4681] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:39.819006 containerd[1824]: 2026-03-04 01:06:39.817 [INFO][4633] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:39.819588 containerd[1824]: time="2026-03-04T01:06:39.819126585Z" level=info msg="TearDown network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" successfully" Mar 4 01:06:39.819588 containerd[1824]: time="2026-03-04T01:06:39.819150625Z" level=info msg="StopPodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" returns successfully" Mar 4 01:06:39.831975 systemd[1]: run-netns-cni\x2ddf104d39\x2dcadd\x2dcdc7\x2d25b1\x2de6ed649fcc31.mount: Deactivated successfully. 
Mar 4 01:06:39.951386 kubelet[3385]: I0304 01:06:39.951026 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-whisker-ca-bundle\") pod \"a0493f89-de54-4276-8d41-63add615c949\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " Mar 4 01:06:39.951386 kubelet[3385]: I0304 01:06:39.951066 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rtqh\" (UniqueName: \"kubernetes.io/projected/a0493f89-de54-4276-8d41-63add615c949-kube-api-access-5rtqh\") pod \"a0493f89-de54-4276-8d41-63add615c949\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " Mar 4 01:06:39.951386 kubelet[3385]: I0304 01:06:39.951099 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a0493f89-de54-4276-8d41-63add615c949-whisker-backend-key-pair\") pod \"a0493f89-de54-4276-8d41-63add615c949\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " Mar 4 01:06:39.951386 kubelet[3385]: I0304 01:06:39.951122 3385 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-nginx-config\") pod \"a0493f89-de54-4276-8d41-63add615c949\" (UID: \"a0493f89-de54-4276-8d41-63add615c949\") " Mar 4 01:06:39.951581 kubelet[3385]: I0304 01:06:39.951421 3385 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a0493f89-de54-4276-8d41-63add615c949" (UID: "a0493f89-de54-4276-8d41-63add615c949"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:06:39.951726 kubelet[3385]: I0304 01:06:39.951708 3385 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a0493f89-de54-4276-8d41-63add615c949" (UID: "a0493f89-de54-4276-8d41-63add615c949"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 01:06:39.956854 kubelet[3385]: I0304 01:06:39.956815 3385 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0493f89-de54-4276-8d41-63add615c949-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a0493f89-de54-4276-8d41-63add615c949" (UID: "a0493f89-de54-4276-8d41-63add615c949"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 01:06:39.957535 kubelet[3385]: I0304 01:06:39.957495 3385 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0493f89-de54-4276-8d41-63add615c949-kube-api-access-5rtqh" (OuterVolumeSpecName: "kube-api-access-5rtqh") pod "a0493f89-de54-4276-8d41-63add615c949" (UID: "a0493f89-de54-4276-8d41-63add615c949"). InnerVolumeSpecName "kube-api-access-5rtqh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 01:06:39.958118 systemd[1]: var-lib-kubelet-pods-a0493f89\x2dde54\x2d4276\x2d8d41\x2d63add615c949-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 01:06:39.961510 systemd[1]: var-lib-kubelet-pods-a0493f89\x2dde54\x2d4276\x2d8d41\x2d63add615c949-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5rtqh.mount: Deactivated successfully. 
Mar 4 01:06:40.051673 kubelet[3385]: I0304 01:06:40.051563 3385 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-whisker-ca-bundle\") on node \"ci-4081.3.6-n-8ef68d175b\" DevicePath \"\"" Mar 4 01:06:40.051673 kubelet[3385]: I0304 01:06:40.051600 3385 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rtqh\" (UniqueName: \"kubernetes.io/projected/a0493f89-de54-4276-8d41-63add615c949-kube-api-access-5rtqh\") on node \"ci-4081.3.6-n-8ef68d175b\" DevicePath \"\"" Mar 4 01:06:40.051673 kubelet[3385]: I0304 01:06:40.051614 3385 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a0493f89-de54-4276-8d41-63add615c949-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-8ef68d175b\" DevicePath \"\"" Mar 4 01:06:40.051673 kubelet[3385]: I0304 01:06:40.051624 3385 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a0493f89-de54-4276-8d41-63add615c949-nginx-config\") on node \"ci-4081.3.6-n-8ef68d175b\" DevicePath \"\"" Mar 4 01:06:40.438440 kernel: calico-node[4762]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 01:06:40.869799 kubelet[3385]: I0304 01:06:40.869759 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/05780f59-f98a-40b9-b68c-037e596452a4-nginx-config\") pod \"whisker-6d76cf5675-cmrvd\" (UID: \"05780f59-f98a-40b9-b68c-037e596452a4\") " pod="calico-system/whisker-6d76cf5675-cmrvd" Mar 4 01:06:40.869799 kubelet[3385]: I0304 01:06:40.869805 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jg7\" (UniqueName: \"kubernetes.io/projected/05780f59-f98a-40b9-b68c-037e596452a4-kube-api-access-47jg7\") pod \"whisker-6d76cf5675-cmrvd\" 
(UID: \"05780f59-f98a-40b9-b68c-037e596452a4\") " pod="calico-system/whisker-6d76cf5675-cmrvd" Mar 4 01:06:40.870343 kubelet[3385]: I0304 01:06:40.869826 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05780f59-f98a-40b9-b68c-037e596452a4-whisker-backend-key-pair\") pod \"whisker-6d76cf5675-cmrvd\" (UID: \"05780f59-f98a-40b9-b68c-037e596452a4\") " pod="calico-system/whisker-6d76cf5675-cmrvd" Mar 4 01:06:40.870343 kubelet[3385]: I0304 01:06:40.869846 3385 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05780f59-f98a-40b9-b68c-037e596452a4-whisker-ca-bundle\") pod \"whisker-6d76cf5675-cmrvd\" (UID: \"05780f59-f98a-40b9-b68c-037e596452a4\") " pod="calico-system/whisker-6d76cf5675-cmrvd" Mar 4 01:06:41.081726 containerd[1824]: time="2026-03-04T01:06:41.081693196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d76cf5675-cmrvd,Uid:05780f59-f98a-40b9-b68c-037e596452a4,Namespace:calico-system,Attempt:0,}" Mar 4 01:06:41.102978 systemd-networkd[1398]: vxlan.calico: Link UP Mar 4 01:06:41.102987 systemd-networkd[1398]: vxlan.calico: Gained carrier Mar 4 01:06:41.279344 systemd-networkd[1398]: calid724830e07d: Link UP Mar 4 01:06:41.279572 systemd-networkd[1398]: calid724830e07d: Gained carrier Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.186 [INFO][4844] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0 whisker-6d76cf5675- calico-system 05780f59-f98a-40b9-b68c-037e596452a4 924 0 2026-03-04 01:06:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d76cf5675 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b whisker-6d76cf5675-cmrvd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid724830e07d [] [] }} ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.187 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.216 [INFO][4868] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" HandleID="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.233 [INFO][4868] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" HandleID="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"whisker-6d76cf5675-cmrvd", "timestamp":"2026-03-04 01:06:41.216807612 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000293340)} Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.235 [INFO][4868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.235 [INFO][4868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.235 [INFO][4868] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.237 [INFO][4868] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.242 [INFO][4868] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.245 [INFO][4868] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.247 [INFO][4868] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.250 [INFO][4868] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.250 [INFO][4868] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.252 [INFO][4868] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247 Mar 
4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.261 [INFO][4868] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.267 [INFO][4868] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.36.66/26] block=192.168.36.64/26 handle="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.267 [INFO][4868] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.66/26] handle="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.267 [INFO][4868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:41.297066 containerd[1824]: 2026-03-04 01:06:41.267 [INFO][4868] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.66/26] IPv6=[] ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" HandleID="k8s-pod-network.82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.297713 containerd[1824]: 2026-03-04 01:06:41.271 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0", GenerateName:"whisker-6d76cf5675-", Namespace:"calico-system", SelfLink:"", 
UID:"05780f59-f98a-40b9-b68c-037e596452a4", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d76cf5675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"whisker-6d76cf5675-cmrvd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid724830e07d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:41.297713 containerd[1824]: 2026-03-04 01:06:41.271 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.66/32] ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.297713 containerd[1824]: 2026-03-04 01:06:41.272 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid724830e07d ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.297713 containerd[1824]: 2026-03-04 01:06:41.278 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.297713 containerd[1824]: 2026-03-04 01:06:41.279 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0", GenerateName:"whisker-6d76cf5675-", Namespace:"calico-system", SelfLink:"", UID:"05780f59-f98a-40b9-b68c-037e596452a4", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d76cf5675", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247", Pod:"whisker-6d76cf5675-cmrvd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.36.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid724830e07d", MAC:"8e:a1:d4:4b:4c:ed", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:41.297713 containerd[1824]: 2026-03-04 01:06:41.291 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247" Namespace="calico-system" Pod="whisker-6d76cf5675-cmrvd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--6d76cf5675--cmrvd-eth0" Mar 4 01:06:41.333421 containerd[1824]: time="2026-03-04T01:06:41.333008797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:41.333421 containerd[1824]: time="2026-03-04T01:06:41.333069757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:41.333421 containerd[1824]: time="2026-03-04T01:06:41.333084517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:41.333421 containerd[1824]: time="2026-03-04T01:06:41.333189597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:41.389876 containerd[1824]: time="2026-03-04T01:06:41.389781275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d76cf5675-cmrvd,Uid:05780f59-f98a-40b9-b68c-037e596452a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247\"" Mar 4 01:06:41.415692 systemd-networkd[1398]: cali2166a1afb44: Gained IPv6LL Mar 4 01:06:41.473090 containerd[1824]: time="2026-03-04T01:06:41.473047553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:41.475737 containerd[1824]: time="2026-03-04T01:06:41.475710513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 4 01:06:41.480758 containerd[1824]: time="2026-03-04T01:06:41.480678912Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:41.485918 containerd[1824]: time="2026-03-04T01:06:41.485622592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:41.486885 containerd[1824]: time="2026-03-04T01:06:41.486308112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.705064481s" Mar 4 01:06:41.486885 containerd[1824]: time="2026-03-04T01:06:41.486339152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image 
reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 4 01:06:41.487829 containerd[1824]: time="2026-03-04T01:06:41.487809872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 01:06:41.496800 containerd[1824]: time="2026-03-04T01:06:41.496772952Z" level=info msg="CreateContainer within sandbox \"0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 01:06:41.545636 containerd[1824]: time="2026-03-04T01:06:41.545505390Z" level=info msg="CreateContainer within sandbox \"0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2f690fa8e429c8ffc2f8ced97cfc6890d2e2b651693d14b29b007d23ae4a32ce\"" Mar 4 01:06:41.547390 containerd[1824]: time="2026-03-04T01:06:41.547153150Z" level=info msg="StartContainer for \"2f690fa8e429c8ffc2f8ced97cfc6890d2e2b651693d14b29b007d23ae4a32ce\"" Mar 4 01:06:41.614640 containerd[1824]: time="2026-03-04T01:06:41.614594148Z" level=info msg="StartContainer for \"2f690fa8e429c8ffc2f8ced97cfc6890d2e2b651693d14b29b007d23ae4a32ce\" returns successfully" Mar 4 01:06:42.439550 systemd-networkd[1398]: calid724830e07d: Gained IPv6LL Mar 4 01:06:42.450513 kubelet[3385]: I0304 01:06:42.450209 3385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0493f89-de54-4276-8d41-63add615c949" path="/var/lib/kubelet/pods/a0493f89-de54-4276-8d41-63add615c949/volumes" Mar 4 01:06:42.759477 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Mar 4 01:06:43.000389 containerd[1824]: time="2026-03-04T01:06:43.000020622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:43.002705 containerd[1824]: time="2026-03-04T01:06:43.002677142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active 
requests=0, bytes read=5882804" Mar 4 01:06:43.005900 containerd[1824]: time="2026-03-04T01:06:43.005853262Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:43.010080 containerd[1824]: time="2026-03-04T01:06:43.010014622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:43.011392 containerd[1824]: time="2026-03-04T01:06:43.010722862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.52280283s" Mar 4 01:06:43.011392 containerd[1824]: time="2026-03-04T01:06:43.010753982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 4 01:06:43.012623 containerd[1824]: time="2026-03-04T01:06:43.012118862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 01:06:43.018846 containerd[1824]: time="2026-03-04T01:06:43.018818302Z" level=info msg="CreateContainer within sandbox \"82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 01:06:43.052116 containerd[1824]: time="2026-03-04T01:06:43.052075820Z" level=info msg="CreateContainer within sandbox \"82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"6216de12c1b3d6269b8d80a6a49dfeec3208ad5da51be9be5dee753ecc286efa\"" Mar 4 01:06:43.053999 containerd[1824]: time="2026-03-04T01:06:43.053960740Z" level=info msg="StartContainer for \"6216de12c1b3d6269b8d80a6a49dfeec3208ad5da51be9be5dee753ecc286efa\"" Mar 4 01:06:43.112258 containerd[1824]: time="2026-03-04T01:06:43.112216418Z" level=info msg="StartContainer for \"6216de12c1b3d6269b8d80a6a49dfeec3208ad5da51be9be5dee753ecc286efa\" returns successfully" Mar 4 01:06:44.676939 containerd[1824]: time="2026-03-04T01:06:44.676895127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:44.679789 containerd[1824]: time="2026-03-04T01:06:44.679762207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 4 01:06:44.682939 containerd[1824]: time="2026-03-04T01:06:44.682890646Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:44.689938 containerd[1824]: time="2026-03-04T01:06:44.689567286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:44.690418 containerd[1824]: time="2026-03-04T01:06:44.690387406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.678237944s" Mar 4 01:06:44.690418 containerd[1824]: 
time="2026-03-04T01:06:44.690417486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 4 01:06:44.691706 containerd[1824]: time="2026-03-04T01:06:44.691617686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 01:06:44.698142 containerd[1824]: time="2026-03-04T01:06:44.697963126Z" level=info msg="CreateContainer within sandbox \"0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 01:06:44.736719 containerd[1824]: time="2026-03-04T01:06:44.736672965Z" level=info msg="CreateContainer within sandbox \"0fe24ce4411a590ea0dc5538ab31dc49bd671a2eda175b0832ddfcb2f8bae448\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8605fcf6315003bd151670f29b3d21b9d6bb944d81768d6c99db420242ed2de3\"" Mar 4 01:06:44.739578 containerd[1824]: time="2026-03-04T01:06:44.738117285Z" level=info msg="StartContainer for \"8605fcf6315003bd151670f29b3d21b9d6bb944d81768d6c99db420242ed2de3\"" Mar 4 01:06:44.786070 containerd[1824]: time="2026-03-04T01:06:44.786036483Z" level=info msg="StartContainer for \"8605fcf6315003bd151670f29b3d21b9d6bb944d81768d6c99db420242ed2de3\" returns successfully" Mar 4 01:06:45.550514 kubelet[3385]: I0304 01:06:45.550472 3385 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 01:06:45.550514 kubelet[3385]: I0304 01:06:45.550514 3385 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 01:06:46.446491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212213518.mount: Deactivated successfully. 
Mar 4 01:06:46.920166 containerd[1824]: time="2026-03-04T01:06:46.920117612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:46.923024 containerd[1824]: time="2026-03-04T01:06:46.922989452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 4 01:06:46.926127 containerd[1824]: time="2026-03-04T01:06:46.926079052Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:46.930617 containerd[1824]: time="2026-03-04T01:06:46.930572492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:46.931875 containerd[1824]: time="2026-03-04T01:06:46.931281372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.239631246s" Mar 4 01:06:46.931875 containerd[1824]: time="2026-03-04T01:06:46.931317052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 4 01:06:46.940337 containerd[1824]: time="2026-03-04T01:06:46.940303172Z" level=info msg="CreateContainer within sandbox \"82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 01:06:46.975339 
containerd[1824]: time="2026-03-04T01:06:46.975304011Z" level=info msg="CreateContainer within sandbox \"82b75f7ef22c7a593a52ae771972c6cfaf4408b6f7fed7ea33e433f73d4d3247\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fda9b81bf763f3ac6430e0fafc25becb1d926f72e4905685e81aea92a2948d7b\"" Mar 4 01:06:46.976158 containerd[1824]: time="2026-03-04T01:06:46.976129771Z" level=info msg="StartContainer for \"fda9b81bf763f3ac6430e0fafc25becb1d926f72e4905685e81aea92a2948d7b\"" Mar 4 01:06:47.034879 containerd[1824]: time="2026-03-04T01:06:47.034832849Z" level=info msg="StartContainer for \"fda9b81bf763f3ac6430e0fafc25becb1d926f72e4905685e81aea92a2948d7b\" returns successfully" Mar 4 01:06:47.714380 kubelet[3385]: I0304 01:06:47.711480 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mzdds" podStartSLOduration=27.800561492 podStartE2EDuration="32.711462106s" podCreationTimestamp="2026-03-04 01:06:15 +0000 UTC" firstStartedPulling="2026-03-04 01:06:39.780582352 +0000 UTC m=+47.478872125" lastFinishedPulling="2026-03-04 01:06:44.691482926 +0000 UTC m=+52.389772739" observedRunningTime="2026-03-04 01:06:45.708915532 +0000 UTC m=+53.407205385" watchObservedRunningTime="2026-03-04 01:06:47.711462106 +0000 UTC m=+55.409751919" Mar 4 01:06:47.714380 kubelet[3385]: I0304 01:06:47.711617 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d76cf5675-cmrvd" podStartSLOduration=2.171404249 podStartE2EDuration="7.711612506s" podCreationTimestamp="2026-03-04 01:06:40 +0000 UTC" firstStartedPulling="2026-03-04 01:06:41.392065035 +0000 UTC m=+49.090354848" lastFinishedPulling="2026-03-04 01:06:46.932273332 +0000 UTC m=+54.630563105" observedRunningTime="2026-03-04 01:06:47.711603306 +0000 UTC m=+55.409893079" watchObservedRunningTime="2026-03-04 01:06:47.711612506 +0000 UTC m=+55.409902319" Mar 4 01:06:49.448026 containerd[1824]: 
time="2026-03-04T01:06:49.447972173Z" level=info msg="StopPodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\"" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.495 [INFO][5168] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.496 [INFO][5168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" iface="eth0" netns="/var/run/netns/cni-41df540c-2eba-0615-c75c-a36825ab9ed0" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.497 [INFO][5168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" iface="eth0" netns="/var/run/netns/cni-41df540c-2eba-0615-c75c-a36825ab9ed0" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.497 [INFO][5168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" iface="eth0" netns="/var/run/netns/cni-41df540c-2eba-0615-c75c-a36825ab9ed0" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.497 [INFO][5168] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.497 [INFO][5168] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.518 [INFO][5175] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.518 [INFO][5175] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.519 [INFO][5175] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.531 [WARNING][5175] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.532 [INFO][5175] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.534 [INFO][5175] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:49.540991 containerd[1824]: 2026-03-04 01:06:49.538 [INFO][5168] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:49.542600 containerd[1824]: time="2026-03-04T01:06:49.541300613Z" level=info msg="TearDown network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" successfully" Mar 4 01:06:49.542600 containerd[1824]: time="2026-03-04T01:06:49.541397013Z" level=info msg="StopPodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" returns successfully" Mar 4 01:06:49.542909 systemd[1]: run-netns-cni\x2d41df540c\x2d2eba\x2d0615\x2dc75c\x2da36825ab9ed0.mount: Deactivated successfully. 
Mar 4 01:06:49.544597 containerd[1824]: time="2026-03-04T01:06:49.543196413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6594fc5f87-7lc2r,Uid:d9e5a6df-2938-4880-baa9-bc2b5f03ecb3,Namespace:calico-system,Attempt:1,}" Mar 4 01:06:49.682644 systemd-networkd[1398]: calif6162f0b09a: Link UP Mar 4 01:06:49.683543 systemd-networkd[1398]: calif6162f0b09a: Gained carrier Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.615 [INFO][5182] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0 calico-kube-controllers-6594fc5f87- calico-system d9e5a6df-2938-4880-baa9-bc2b5f03ecb3 969 0 2026-03-04 01:06:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6594fc5f87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b calico-kube-controllers-6594fc5f87-7lc2r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif6162f0b09a [] [] }} ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.615 [INFO][5182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.637 [INFO][5194] ipam/ipam_plugin.go 235: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" HandleID="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.647 [INFO][5194] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" HandleID="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273c00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"calico-kube-controllers-6594fc5f87-7lc2r", "timestamp":"2026-03-04 01:06:49.637945493 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e3600)} Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.647 [INFO][5194] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.647 [INFO][5194] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.647 [INFO][5194] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.649 [INFO][5194] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.652 [INFO][5194] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.657 [INFO][5194] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.659 [INFO][5194] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.661 [INFO][5194] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.661 [INFO][5194] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.662 [INFO][5194] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9 Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.669 [INFO][5194] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.675 [INFO][5194] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.36.67/26] block=192.168.36.64/26 handle="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.675 [INFO][5194] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.67/26] handle="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.675 [INFO][5194] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:49.707216 containerd[1824]: 2026-03-04 01:06:49.675 [INFO][5194] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.67/26] IPv6=[] ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" HandleID="k8s-pod-network.f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.707745 containerd[1824]: 2026-03-04 01:06:49.678 [INFO][5182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0", GenerateName:"calico-kube-controllers-6594fc5f87-", Namespace:"calico-system", SelfLink:"", UID:"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"6594fc5f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"calico-kube-controllers-6594fc5f87-7lc2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6162f0b09a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:49.707745 containerd[1824]: 2026-03-04 01:06:49.678 [INFO][5182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.67/32] ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.707745 containerd[1824]: 2026-03-04 01:06:49.678 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6162f0b09a ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.707745 containerd[1824]: 2026-03-04 01:06:49.684 [INFO][5182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" 
Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.707745 containerd[1824]: 2026-03-04 01:06:49.685 [INFO][5182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0", GenerateName:"calico-kube-controllers-6594fc5f87-", Namespace:"calico-system", SelfLink:"", UID:"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6594fc5f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9", Pod:"calico-kube-controllers-6594fc5f87-7lc2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6162f0b09a", MAC:"4a:7d:03:ea:9c:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:49.707745 containerd[1824]: 2026-03-04 01:06:49.701 [INFO][5182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9" Namespace="calico-system" Pod="calico-kube-controllers-6594fc5f87-7lc2r" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:49.732041 containerd[1824]: time="2026-03-04T01:06:49.731946373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:49.732041 containerd[1824]: time="2026-03-04T01:06:49.732006013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:49.732041 containerd[1824]: time="2026-03-04T01:06:49.732020573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:49.732290 containerd[1824]: time="2026-03-04T01:06:49.732107853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:49.780299 containerd[1824]: time="2026-03-04T01:06:49.780259492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6594fc5f87-7lc2r,Uid:d9e5a6df-2938-4880-baa9-bc2b5f03ecb3,Namespace:calico-system,Attempt:1,} returns sandbox id \"f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9\"" Mar 4 01:06:49.782384 containerd[1824]: time="2026-03-04T01:06:49.782261932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 01:06:50.451740 containerd[1824]: time="2026-03-04T01:06:50.451016371Z" level=info msg="StopPodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\"" Mar 4 01:06:50.452541 containerd[1824]: time="2026-03-04T01:06:50.451606531Z" level=info msg="StopPodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\"" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.517 [INFO][5280] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.518 [INFO][5280] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" iface="eth0" netns="/var/run/netns/cni-f17223d5-57d4-228c-0f3c-7fd277071d96" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.518 [INFO][5280] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" iface="eth0" netns="/var/run/netns/cni-f17223d5-57d4-228c-0f3c-7fd277071d96" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.519 [INFO][5280] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" iface="eth0" netns="/var/run/netns/cni-f17223d5-57d4-228c-0f3c-7fd277071d96" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.519 [INFO][5280] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.519 [INFO][5280] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.548 [INFO][5297] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.549 [INFO][5297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.549 [INFO][5297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.561 [WARNING][5297] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.561 [INFO][5297] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.562 [INFO][5297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:50.569864 containerd[1824]: 2026-03-04 01:06:50.567 [INFO][5280] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:50.573130 containerd[1824]: time="2026-03-04T01:06:50.571504611Z" level=info msg="TearDown network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" successfully" Mar 4 01:06:50.573130 containerd[1824]: time="2026-03-04T01:06:50.571539851Z" level=info msg="StopPodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" returns successfully" Mar 4 01:06:50.573913 systemd[1]: run-netns-cni\x2df17223d5\x2d57d4\x2d228c\x2d0f3c\x2d7fd277071d96.mount: Deactivated successfully. 
Mar 4 01:06:50.582447 containerd[1824]: time="2026-03-04T01:06:50.582157931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-nrmb5,Uid:43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3,Namespace:calico-system,Attempt:1,}" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.524 [INFO][5288] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.525 [INFO][5288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" iface="eth0" netns="/var/run/netns/cni-40986485-1583-7430-2e65-25b13889e469" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.526 [INFO][5288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" iface="eth0" netns="/var/run/netns/cni-40986485-1583-7430-2e65-25b13889e469" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.526 [INFO][5288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" iface="eth0" netns="/var/run/netns/cni-40986485-1583-7430-2e65-25b13889e469" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.526 [INFO][5288] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.526 [INFO][5288] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.560 [INFO][5302] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.560 [INFO][5302] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.562 [INFO][5302] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.578 [WARNING][5302] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.578 [INFO][5302] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.579 [INFO][5302] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:50.585655 containerd[1824]: 2026-03-04 01:06:50.583 [INFO][5288] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:50.586004 containerd[1824]: time="2026-03-04T01:06:50.585793691Z" level=info msg="TearDown network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" successfully" Mar 4 01:06:50.586004 containerd[1824]: time="2026-03-04T01:06:50.585816291Z" level=info msg="StopPodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" returns successfully" Mar 4 01:06:50.586321 containerd[1824]: time="2026-03-04T01:06:50.586288571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz2n2,Uid:381e9cc1-21d8-4ab6-bdd5-52a4f34253d4,Namespace:kube-system,Attempt:1,}" Mar 4 01:06:50.589022 systemd[1]: run-netns-cni\x2d40986485\x2d1583\x2d7430\x2d2e65\x2d25b13889e469.mount: Deactivated successfully. 
Mar 4 01:06:50.759532 systemd-networkd[1398]: calif6162f0b09a: Gained IPv6LL Mar 4 01:06:50.803209 systemd-networkd[1398]: cali221a77deb41: Link UP Mar 4 01:06:50.805968 systemd-networkd[1398]: cali221a77deb41: Gained carrier Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.700 [INFO][5311] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0 calico-apiserver-7c58d797df- calico-system 43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3 978 0 2026-03-04 01:06:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c58d797df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b calico-apiserver-7c58d797df-nrmb5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali221a77deb41 [] [] }} ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.701 [INFO][5311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.741 [INFO][5334] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" HandleID="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" 
Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.755 [INFO][5334] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" HandleID="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"calico-apiserver-7c58d797df-nrmb5", "timestamp":"2026-03-04 01:06:50.74141873 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000269600)} Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.755 [INFO][5334] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.755 [INFO][5334] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.755 [INFO][5334] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.757 [INFO][5334] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.764 [INFO][5334] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.770 [INFO][5334] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.772 [INFO][5334] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.776 [INFO][5334] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.776 [INFO][5334] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.777 [INFO][5334] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.784 [INFO][5334] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.796 [INFO][5334] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.36.68/26] block=192.168.36.64/26 handle="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.796 [INFO][5334] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.68/26] handle="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.796 [INFO][5334] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:50.830596 containerd[1824]: 2026-03-04 01:06:50.796 [INFO][5334] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.68/26] IPv6=[] ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" HandleID="k8s-pod-network.f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.831194 containerd[1824]: 2026-03-04 01:06:50.799 [INFO][5311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"calico-apiserver-7c58d797df-nrmb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali221a77deb41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:50.831194 containerd[1824]: 2026-03-04 01:06:50.799 [INFO][5311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.68/32] ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.831194 containerd[1824]: 2026-03-04 01:06:50.799 [INFO][5311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali221a77deb41 ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.831194 containerd[1824]: 2026-03-04 01:06:50.810 [INFO][5311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" 
WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.831194 containerd[1824]: 2026-03-04 01:06:50.811 [INFO][5311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe", Pod:"calico-apiserver-7c58d797df-nrmb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali221a77deb41", MAC:"72:61:72:70:47:ef", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:50.831194 containerd[1824]: 2026-03-04 01:06:50.825 [INFO][5311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-nrmb5" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:50.880671 containerd[1824]: time="2026-03-04T01:06:50.879632370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:50.880671 containerd[1824]: time="2026-03-04T01:06:50.879705810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:50.880671 containerd[1824]: time="2026-03-04T01:06:50.879720690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:50.880671 containerd[1824]: time="2026-03-04T01:06:50.879821690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:50.905749 systemd-networkd[1398]: cali6c5791d1168: Link UP Mar 4 01:06:50.906887 systemd-networkd[1398]: cali6c5791d1168: Gained carrier Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.714 [INFO][5315] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0 coredns-674b8bbfcf- kube-system 381e9cc1-21d8-4ab6-bdd5-52a4f34253d4 979 0 2026-03-04 01:05:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b coredns-674b8bbfcf-dz2n2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c5791d1168 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.714 [INFO][5315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.741 [INFO][5340] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" HandleID="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.757 [INFO][5340] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" HandleID="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273370), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"coredns-674b8bbfcf-dz2n2", "timestamp":"2026-03-04 01:06:50.74141917 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000200f20)} Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.757 [INFO][5340] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.796 [INFO][5340] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.796 [INFO][5340] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.859 [INFO][5340] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.865 [INFO][5340] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.871 [INFO][5340] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.873 [INFO][5340] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.875 [INFO][5340] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.875 [INFO][5340] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.879 [INFO][5340] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.885 [INFO][5340] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.896 [INFO][5340] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.36.69/26] block=192.168.36.64/26 handle="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.896 [INFO][5340] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.69/26] handle="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.896 [INFO][5340] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:50.928901 containerd[1824]: 2026-03-04 01:06:50.896 [INFO][5340] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.69/26] IPv6=[] ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" HandleID="k8s-pod-network.163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.929838 containerd[1824]: 2026-03-04 01:06:50.901 [INFO][5315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"coredns-674b8bbfcf-dz2n2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c5791d1168", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:50.929838 containerd[1824]: 2026-03-04 01:06:50.901 [INFO][5315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.69/32] ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.929838 containerd[1824]: 2026-03-04 01:06:50.901 [INFO][5315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c5791d1168 ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.929838 containerd[1824]: 2026-03-04 01:06:50.907 [INFO][5315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.929838 containerd[1824]: 2026-03-04 01:06:50.907 [INFO][5315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f", Pod:"coredns-674b8bbfcf-dz2n2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c5791d1168", MAC:"a6:33:30:0d:45:5d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:50.929838 containerd[1824]: 2026-03-04 01:06:50.925 [INFO][5315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz2n2" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:50.951639 containerd[1824]: time="2026-03-04T01:06:50.951530250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-nrmb5,Uid:43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3,Namespace:calico-system,Attempt:1,} returns sandbox id \"f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe\"" Mar 4 01:06:51.001178 containerd[1824]: time="2026-03-04T01:06:51.000871370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:51.001178 containerd[1824]: time="2026-03-04T01:06:51.000941010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:51.001178 containerd[1824]: time="2026-03-04T01:06:51.000978490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:51.001178 containerd[1824]: time="2026-03-04T01:06:51.001126570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:51.049943 containerd[1824]: time="2026-03-04T01:06:51.049909449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz2n2,Uid:381e9cc1-21d8-4ab6-bdd5-52a4f34253d4,Namespace:kube-system,Attempt:1,} returns sandbox id \"163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f\"" Mar 4 01:06:51.062903 containerd[1824]: time="2026-03-04T01:06:51.062863649Z" level=info msg="CreateContainer within sandbox \"163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:06:51.096694 containerd[1824]: time="2026-03-04T01:06:51.096650129Z" level=info msg="CreateContainer within sandbox \"163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f441869f9e53a2f002893dde7671e248e9a90e7c5fb9700661adc88434eb7bf5\"" Mar 4 01:06:51.099316 containerd[1824]: time="2026-03-04T01:06:51.097468129Z" level=info msg="StartContainer for \"f441869f9e53a2f002893dde7671e248e9a90e7c5fb9700661adc88434eb7bf5\"" Mar 4 01:06:51.148196 containerd[1824]: time="2026-03-04T01:06:51.147599129Z" level=info msg="StartContainer for \"f441869f9e53a2f002893dde7671e248e9a90e7c5fb9700661adc88434eb7bf5\" returns successfully" Mar 4 01:06:51.736117 kubelet[3385]: I0304 01:06:51.734598 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dz2n2" podStartSLOduration=52.734580568 podStartE2EDuration="52.734580568s" podCreationTimestamp="2026-03-04 01:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:06:51.732751448 +0000 UTC m=+59.431041261" watchObservedRunningTime="2026-03-04 01:06:51.734580568 +0000 UTC m=+59.432870381" Mar 4 01:06:52.436834 containerd[1824]: time="2026-03-04T01:06:52.436712406Z" 
level=info msg="StopPodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\"" Mar 4 01:06:52.454615 containerd[1824]: time="2026-03-04T01:06:52.454143526Z" level=info msg="StopPodSandbox for \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\"" Mar 4 01:06:52.488328 systemd-networkd[1398]: cali6c5791d1168: Gained IPv6LL Mar 4 01:06:52.615650 systemd-networkd[1398]: cali221a77deb41: Gained IPv6LL Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.552 [WARNING][5540] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe", Pod:"calico-apiserver-7c58d797df-nrmb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali221a77deb41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.553 [INFO][5540] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.553 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" iface="eth0" netns="" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.553 [INFO][5540] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.553 [INFO][5540] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.601 [INFO][5569] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.601 [INFO][5569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.601 [INFO][5569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.613 [WARNING][5569] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.613 [INFO][5569] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.615 [INFO][5569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:52.624830 containerd[1824]: 2026-03-04 01:06:52.622 [INFO][5540] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.625225 containerd[1824]: time="2026-03-04T01:06:52.624874966Z" level=info msg="TearDown network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" successfully" Mar 4 01:06:52.625225 containerd[1824]: time="2026-03-04T01:06:52.624899246Z" level=info msg="StopPodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" returns successfully" Mar 4 01:06:52.627491 containerd[1824]: time="2026-03-04T01:06:52.626572006Z" level=info msg="RemovePodSandbox for \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\"" Mar 4 01:06:52.650163 containerd[1824]: time="2026-03-04T01:06:52.650021726Z" level=info msg="Forcibly stopping sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\"" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.562 [INFO][5555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 
01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.562 [INFO][5555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" iface="eth0" netns="/var/run/netns/cni-e871595d-9900-20f2-5bfb-61cdaca9ae4d" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.562 [INFO][5555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" iface="eth0" netns="/var/run/netns/cni-e871595d-9900-20f2-5bfb-61cdaca9ae4d" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.563 [INFO][5555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" iface="eth0" netns="/var/run/netns/cni-e871595d-9900-20f2-5bfb-61cdaca9ae4d" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.563 [INFO][5555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.563 [INFO][5555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.629 [INFO][5574] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.629 [INFO][5574] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.630 [INFO][5574] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.647 [WARNING][5574] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.647 [INFO][5574] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.650 [INFO][5574] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:52.662561 containerd[1824]: 2026-03-04 01:06:52.656 [INFO][5555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:06:52.666056 containerd[1824]: time="2026-03-04T01:06:52.665466286Z" level=info msg="TearDown network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" successfully" Mar 4 01:06:52.666056 containerd[1824]: time="2026-03-04T01:06:52.665528406Z" level=info msg="StopPodSandbox for \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" returns successfully" Mar 4 01:06:52.667142 systemd[1]: run-netns-cni\x2de871595d\x2d9900\x2d20f2\x2d5bfb\x2d61cdaca9ae4d.mount: Deactivated successfully. 
Mar 4 01:06:52.667834 containerd[1824]: time="2026-03-04T01:06:52.667766566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9nxp,Uid:3d654472-1f4e-4e23-8263-e9fe218626cc,Namespace:kube-system,Attempt:1,}" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.740 [WARNING][5595] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"43b4ceef-d4ae-420e-b9e4-6a44b6d9ede3", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe", Pod:"calico-apiserver-7c58d797df-nrmb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali221a77deb41", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.741 [INFO][5595] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.741 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" iface="eth0" netns="" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.741 [INFO][5595] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.741 [INFO][5595] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.784 [INFO][5611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.785 [INFO][5611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.785 [INFO][5611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.798 [WARNING][5611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.798 [INFO][5611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" HandleID="k8s-pod-network.3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--nrmb5-eth0" Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.800 [INFO][5611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:52.809273 containerd[1824]: 2026-03-04 01:06:52.804 [INFO][5595] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0" Mar 4 01:06:52.809273 containerd[1824]: time="2026-03-04T01:06:52.809242245Z" level=info msg="TearDown network for sandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" successfully" Mar 4 01:06:52.836630 containerd[1824]: time="2026-03-04T01:06:52.835992445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:06:52.836630 containerd[1824]: time="2026-03-04T01:06:52.836334085Z" level=info msg="RemovePodSandbox \"3b8fa57ec00460950a649c1eaa3fa1f17c281b5af5394c3158cc6799cb0b59f0\" returns successfully" Mar 4 01:06:52.837249 containerd[1824]: time="2026-03-04T01:06:52.837222165Z" level=info msg="StopPodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\"" Mar 4 01:06:52.870904 systemd-networkd[1398]: cali63968f8fd40: Link UP Mar 4 01:06:52.872091 systemd-networkd[1398]: cali63968f8fd40: Gained carrier Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.782 [INFO][5600] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0 coredns-674b8bbfcf- kube-system 3d654472-1f4e-4e23-8263-e9fe218626cc 1007 0 2026-03-04 01:05:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b coredns-674b8bbfcf-t9nxp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63968f8fd40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.783 [INFO][5600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.819 [INFO][5621] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" HandleID="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.828 [INFO][5621] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" HandleID="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"coredns-674b8bbfcf-t9nxp", "timestamp":"2026-03-04 01:06:52.819683965 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000228dc0)} Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.829 [INFO][5621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.829 [INFO][5621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.829 [INFO][5621] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.830 [INFO][5621] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.834 [INFO][5621] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.838 [INFO][5621] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.840 [INFO][5621] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.842 [INFO][5621] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.842 [INFO][5621] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.844 [INFO][5621] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.852 [INFO][5621] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.862 [INFO][5621] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.36.70/26] block=192.168.36.64/26 handle="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.862 [INFO][5621] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.70/26] handle="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.862 [INFO][5621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:52.900245 containerd[1824]: 2026-03-04 01:06:52.862 [INFO][5621] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.70/26] IPv6=[] ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" HandleID="k8s-pod-network.92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.901172 containerd[1824]: 2026-03-04 01:06:52.866 [INFO][5600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3d654472-1f4e-4e23-8263-e9fe218626cc", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"coredns-674b8bbfcf-t9nxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63968f8fd40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:52.901172 containerd[1824]: 2026-03-04 01:06:52.866 [INFO][5600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.70/32] ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.901172 containerd[1824]: 2026-03-04 01:06:52.866 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63968f8fd40 ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.901172 containerd[1824]: 2026-03-04 01:06:52.873 [INFO][5600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.901172 containerd[1824]: 2026-03-04 01:06:52.876 [INFO][5600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3d654472-1f4e-4e23-8263-e9fe218626cc", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee", Pod:"coredns-674b8bbfcf-t9nxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63968f8fd40", MAC:"fe:04:18:7f:f7:e9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:52.901172 containerd[1824]: 2026-03-04 01:06:52.894 [INFO][5600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9nxp" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:06:52.941398 containerd[1824]: time="2026-03-04T01:06:52.940930605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:52.945053 containerd[1824]: time="2026-03-04T01:06:52.945020085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 4 01:06:52.948496 containerd[1824]: time="2026-03-04T01:06:52.948465405Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:52.948689 containerd[1824]: time="2026-03-04T01:06:52.948139685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:52.948689 containerd[1824]: time="2026-03-04T01:06:52.948205645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:52.948689 containerd[1824]: time="2026-03-04T01:06:52.948217205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:52.948689 containerd[1824]: time="2026-03-04T01:06:52.948311645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:52.965926 containerd[1824]: time="2026-03-04T01:06:52.965196285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:52.973428 containerd[1824]: time="2026-03-04T01:06:52.966542005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.184248193s" Mar 4 01:06:52.973428 containerd[1824]: time="2026-03-04T01:06:52.970650605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 4 01:06:52.983794 containerd[1824]: time="2026-03-04T01:06:52.983592365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:06:52.997908 containerd[1824]: time="2026-03-04T01:06:52.997870365Z" level=info msg="CreateContainer within sandbox \"f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 01:06:53.034375 containerd[1824]: time="2026-03-04T01:06:53.034327445Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9nxp,Uid:3d654472-1f4e-4e23-8263-e9fe218626cc,Namespace:kube-system,Attempt:1,} returns sandbox id \"92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee\"" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:52.926 [WARNING][5636] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:52.927 [INFO][5636] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:52.928 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" iface="eth0" netns="" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:52.928 [INFO][5636] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:52.928 [INFO][5636] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.020 [INFO][5659] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.020 [INFO][5659] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.021 [INFO][5659] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.036 [WARNING][5659] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.036 [INFO][5659] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.037 [INFO][5659] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.047623 containerd[1824]: 2026-03-04 01:06:53.043 [INFO][5636] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.047979 containerd[1824]: time="2026-03-04T01:06:53.047651085Z" level=info msg="TearDown network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" successfully" Mar 4 01:06:53.047979 containerd[1824]: time="2026-03-04T01:06:53.047739805Z" level=info msg="StopPodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" returns successfully" Mar 4 01:06:53.047979 containerd[1824]: time="2026-03-04T01:06:53.047715325Z" level=info msg="CreateContainer within sandbox \"92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:06:53.048138 containerd[1824]: time="2026-03-04T01:06:53.048101245Z" level=info msg="RemovePodSandbox for \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\"" Mar 4 01:06:53.048249 containerd[1824]: time="2026-03-04T01:06:53.048131445Z" level=info msg="Forcibly stopping sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\"" Mar 4 01:06:53.058758 containerd[1824]: time="2026-03-04T01:06:53.058725085Z" level=info msg="CreateContainer within sandbox \"f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a30d31ca93a96f360947d3d46651fe6cea431a5729525c13ce9615936c9237bb\"" Mar 4 01:06:53.061055 containerd[1824]: time="2026-03-04T01:06:53.060965205Z" level=info msg="StartContainer for \"a30d31ca93a96f360947d3d46651fe6cea431a5729525c13ce9615936c9237bb\"" Mar 4 01:06:53.096641 containerd[1824]: time="2026-03-04T01:06:53.096541085Z" level=info msg="CreateContainer within sandbox \"92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa9038eba87a9ca61ae14026ce7a0c735d8959181843a80a98ce0e328fab9781\"" Mar 4 
01:06:53.098291 containerd[1824]: time="2026-03-04T01:06:53.098252845Z" level=info msg="StartContainer for \"fa9038eba87a9ca61ae14026ce7a0c735d8959181843a80a98ce0e328fab9781\"" Mar 4 01:06:53.148595 containerd[1824]: time="2026-03-04T01:06:53.148205285Z" level=info msg="StartContainer for \"a30d31ca93a96f360947d3d46651fe6cea431a5729525c13ce9615936c9237bb\" returns successfully" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.084 [WARNING][5723] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.085 [INFO][5723] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.085 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" iface="eth0" netns="" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.085 [INFO][5723] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.085 [INFO][5723] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.129 [INFO][5745] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.130 [INFO][5745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.130 [INFO][5745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.145 [WARNING][5745] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.145 [INFO][5745] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" HandleID="k8s-pod-network.13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Workload="ci--4081.3.6--n--8ef68d175b-k8s-whisker--5d6b8d5546--vcjv4-eth0" Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.147 [INFO][5745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.156201 containerd[1824]: 2026-03-04 01:06:53.153 [INFO][5723] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b" Mar 4 01:06:53.157203 containerd[1824]: time="2026-03-04T01:06:53.156179885Z" level=info msg="TearDown network for sandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" successfully" Mar 4 01:06:53.168449 containerd[1824]: time="2026-03-04T01:06:53.168377845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:06:53.175624 containerd[1824]: time="2026-03-04T01:06:53.175585444Z" level=info msg="RemovePodSandbox \"13ff1ec141f6592ede3be13aef9490de350912f7d8d9378d79734c06fad6e68b\" returns successfully" Mar 4 01:06:53.175734 containerd[1824]: time="2026-03-04T01:06:53.175665044Z" level=info msg="StartContainer for \"fa9038eba87a9ca61ae14026ce7a0c735d8959181843a80a98ce0e328fab9781\" returns successfully" Mar 4 01:06:53.176597 containerd[1824]: time="2026-03-04T01:06:53.176566724Z" level=info msg="StopPodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\"" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.230 [WARNING][5818] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0", GenerateName:"calico-kube-controllers-6594fc5f87-", Namespace:"calico-system", SelfLink:"", UID:"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6594fc5f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", 
ContainerID:"f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9", Pod:"calico-kube-controllers-6594fc5f87-7lc2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6162f0b09a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.231 [INFO][5818] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.231 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" iface="eth0" netns="" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.232 [INFO][5818] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.232 [INFO][5818] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.271 [INFO][5830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.272 [INFO][5830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.272 [INFO][5830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.282 [WARNING][5830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.282 [INFO][5830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.283 [INFO][5830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.298247 containerd[1824]: 2026-03-04 01:06:53.288 [INFO][5818] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.298247 containerd[1824]: time="2026-03-04T01:06:53.298237244Z" level=info msg="TearDown network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" successfully" Mar 4 01:06:53.298731 containerd[1824]: time="2026-03-04T01:06:53.298259524Z" level=info msg="StopPodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" returns successfully" Mar 4 01:06:53.300028 containerd[1824]: time="2026-03-04T01:06:53.298753004Z" level=info msg="RemovePodSandbox for \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\"" Mar 4 01:06:53.300028 containerd[1824]: time="2026-03-04T01:06:53.298782164Z" level=info msg="Forcibly stopping sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\"" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.353 [WARNING][5848] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0", GenerateName:"calico-kube-controllers-6594fc5f87-", Namespace:"calico-system", SelfLink:"", UID:"d9e5a6df-2938-4880-baa9-bc2b5f03ecb3", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6594fc5f87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"f641bb3f255fd3f449428d0cd4d698bfe4dc12dd66bbb7c542d0b848f5c0d0f9", Pod:"calico-kube-controllers-6594fc5f87-7lc2r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.36.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6162f0b09a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.353 [INFO][5848] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.353 [INFO][5848] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" iface="eth0" netns="" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.353 [INFO][5848] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.353 [INFO][5848] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.373 [INFO][5856] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.374 [INFO][5856] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.374 [INFO][5856] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.382 [WARNING][5856] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.382 [INFO][5856] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" HandleID="k8s-pod-network.a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--kube--controllers--6594fc5f87--7lc2r-eth0" Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.383 [INFO][5856] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.389679 containerd[1824]: 2026-03-04 01:06:53.386 [INFO][5848] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d" Mar 4 01:06:53.390087 containerd[1824]: time="2026-03-04T01:06:53.389693804Z" level=info msg="TearDown network for sandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" successfully" Mar 4 01:06:53.400568 containerd[1824]: time="2026-03-04T01:06:53.400528484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:06:53.400667 containerd[1824]: time="2026-03-04T01:06:53.400603404Z" level=info msg="RemovePodSandbox \"a020e9d30b019f9c7f88eef9fbae2d4d0594d5224ff571501710c77c0464807d\" returns successfully" Mar 4 01:06:53.401116 containerd[1824]: time="2026-03-04T01:06:53.401093644Z" level=info msg="StopPodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\"" Mar 4 01:06:53.449889 containerd[1824]: time="2026-03-04T01:06:53.448613084Z" level=info msg="StopPodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\"" Mar 4 01:06:53.449889 containerd[1824]: time="2026-03-04T01:06:53.448850164Z" level=info msg="StopPodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\"" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.434 [WARNING][5871] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", 
ContainerID:"163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f", Pod:"coredns-674b8bbfcf-dz2n2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c5791d1168", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.434 [INFO][5871] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.434 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" iface="eth0" netns="" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.434 [INFO][5871] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.434 [INFO][5871] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.467 [INFO][5878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.467 [INFO][5878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.467 [INFO][5878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.477 [WARNING][5878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.477 [INFO][5878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.480 [INFO][5878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.489896 containerd[1824]: 2026-03-04 01:06:53.488 [INFO][5871] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.490295 containerd[1824]: time="2026-03-04T01:06:53.489936764Z" level=info msg="TearDown network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" successfully" Mar 4 01:06:53.490295 containerd[1824]: time="2026-03-04T01:06:53.489961244Z" level=info msg="StopPodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" returns successfully" Mar 4 01:06:53.490970 containerd[1824]: time="2026-03-04T01:06:53.490716204Z" level=info msg="RemovePodSandbox for \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\"" Mar 4 01:06:53.490970 containerd[1824]: time="2026-03-04T01:06:53.490745804Z" level=info msg="Forcibly stopping sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\"" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.529 [INFO][5900] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:06:53.609470 
containerd[1824]: 2026-03-04 01:06:53.529 [INFO][5900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" iface="eth0" netns="/var/run/netns/cni-e7725a2c-5407-1c6b-9dfa-5a577a684f6a" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.530 [INFO][5900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" iface="eth0" netns="/var/run/netns/cni-e7725a2c-5407-1c6b-9dfa-5a577a684f6a" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.530 [INFO][5900] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" iface="eth0" netns="/var/run/netns/cni-e7725a2c-5407-1c6b-9dfa-5a577a684f6a" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.530 [INFO][5900] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.530 [INFO][5900] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.583 [INFO][5928] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.583 [INFO][5928] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.584 [INFO][5928] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.597 [WARNING][5928] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.597 [INFO][5928] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.601 [INFO][5928] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.609470 containerd[1824]: 2026-03-04 01:06:53.605 [INFO][5900] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:06:53.610378 containerd[1824]: time="2026-03-04T01:06:53.609691443Z" level=info msg="TearDown network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" successfully" Mar 4 01:06:53.610378 containerd[1824]: time="2026-03-04T01:06:53.609718923Z" level=info msg="StopPodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" returns successfully" Mar 4 01:06:53.610770 containerd[1824]: time="2026-03-04T01:06:53.610436283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-d2knn,Uid:f7716632-24cc-4b2d-9197-7d4214b114df,Namespace:calico-system,Attempt:1,}" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.567 [INFO][5904] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.568 [INFO][5904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" iface="eth0" netns="/var/run/netns/cni-8424f8a9-cfeb-13e6-529a-639bee6d42c7" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.570 [INFO][5904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" iface="eth0" netns="/var/run/netns/cni-8424f8a9-cfeb-13e6-529a-639bee6d42c7" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.570 [INFO][5904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" iface="eth0" netns="/var/run/netns/cni-8424f8a9-cfeb-13e6-529a-639bee6d42c7" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.570 [INFO][5904] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.570 [INFO][5904] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.612 [INFO][5935] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.612 [INFO][5935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.612 [INFO][5935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.622 [WARNING][5935] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.623 [INFO][5935] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.624 [INFO][5935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.629858 containerd[1824]: 2026-03-04 01:06:53.626 [INFO][5904] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.569 [WARNING][5920] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"381e9cc1-21d8-4ab6-bdd5-52a4f34253d4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"163de35f14fe17703fe06757cb264695b1b205a79e033c80e673c67d2178a23f", Pod:"coredns-674b8bbfcf-dz2n2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c5791d1168", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 
01:06:53.569 [INFO][5920] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.569 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" iface="eth0" netns="" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.570 [INFO][5920] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.570 [INFO][5920] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.613 [INFO][5936] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.613 [INFO][5936] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.624 [INFO][5936] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.634 [WARNING][5936] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.634 [INFO][5936] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" HandleID="k8s-pod-network.c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--dz2n2-eth0" Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.635 [INFO][5936] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:53.639580 containerd[1824]: 2026-03-04 01:06:53.637 [INFO][5920] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75" Mar 4 01:06:53.669257 systemd[1]: run-netns-cni\x2de7725a2c\x2d5407\x2d1c6b\x2d9dfa\x2d5a577a684f6a.mount: Deactivated successfully. Mar 4 01:06:53.670773 containerd[1824]: time="2026-03-04T01:06:53.670685203Z" level=info msg="TearDown network for sandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" successfully" Mar 4 01:06:53.679398 systemd[1]: run-netns-cni\x2d8424f8a9\x2dcfeb\x2d13e6\x2d529a\x2d639bee6d42c7.mount: Deactivated successfully. 
Mar 4 01:06:53.684010 containerd[1824]: time="2026-03-04T01:06:53.683928363Z" level=info msg="TearDown network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" successfully" Mar 4 01:06:53.684010 containerd[1824]: time="2026-03-04T01:06:53.684007523Z" level=info msg="StopPodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" returns successfully" Mar 4 01:06:53.684879 containerd[1824]: time="2026-03-04T01:06:53.684691443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q7bfd,Uid:ea0036f7-27e7-42ae-8dc8-5a1007c59806,Namespace:calico-system,Attempt:1,}" Mar 4 01:06:53.769006 kubelet[3385]: I0304 01:06:53.767353 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6594fc5f87-7lc2r" podStartSLOduration=35.57375809 podStartE2EDuration="38.767336243s" podCreationTimestamp="2026-03-04 01:06:15 +0000 UTC" firstStartedPulling="2026-03-04 01:06:49.781975932 +0000 UTC m=+57.480265745" lastFinishedPulling="2026-03-04 01:06:52.975554085 +0000 UTC m=+60.673843898" observedRunningTime="2026-03-04 01:06:53.752529283 +0000 UTC m=+61.450819096" watchObservedRunningTime="2026-03-04 01:06:53.767336243 +0000 UTC m=+61.465626056" Mar 4 01:06:53.770191 kubelet[3385]: I0304 01:06:53.769885 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t9nxp" podStartSLOduration=54.769869683 podStartE2EDuration="54.769869683s" podCreationTimestamp="2026-03-04 01:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:06:53.769714483 +0000 UTC m=+61.468004296" watchObservedRunningTime="2026-03-04 01:06:53.769869683 +0000 UTC m=+61.468159496" Mar 4 01:06:53.789951 systemd[1]: run-containerd-runc-k8s.io-a30d31ca93a96f360947d3d46651fe6cea431a5729525c13ce9615936c9237bb-runc.ZnqvVY.mount: 
Deactivated successfully. Mar 4 01:06:54.096015 containerd[1824]: time="2026-03-04T01:06:54.095971562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:06:54.096163 containerd[1824]: time="2026-03-04T01:06:54.096049762Z" level=info msg="RemovePodSandbox \"c06b50321ed65d964c83562f2fbced97d36ecea82c118383602a40fe0ff2dd75\" returns successfully" Mar 4 01:06:54.253086 systemd-networkd[1398]: caliba523b532fe: Link UP Mar 4 01:06:54.253270 systemd-networkd[1398]: caliba523b532fe: Gained carrier Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.159 [INFO][5973] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0 calico-apiserver-7c58d797df- calico-system f7716632-24cc-4b2d-9197-7d4214b114df 1022 0 2026-03-04 01:06:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c58d797df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b calico-apiserver-7c58d797df-d2knn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliba523b532fe [] [] }} ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.159 [INFO][5973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" 
Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.191 [INFO][5994] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" HandleID="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.204 [INFO][5994] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" HandleID="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000376290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"calico-apiserver-7c58d797df-d2knn", "timestamp":"2026-03-04 01:06:54.191551962 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400021ec60)} Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.204 [INFO][5994] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.204 [INFO][5994] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.204 [INFO][5994] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.207 [INFO][5994] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.211 [INFO][5994] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.217 [INFO][5994] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.218 [INFO][5994] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.222 [INFO][5994] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.223 [INFO][5994] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.226 [INFO][5994] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4 Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.235 [INFO][5994] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.245 [INFO][5994] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.36.71/26] block=192.168.36.64/26 handle="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.245 [INFO][5994] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.71/26] handle="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.245 [INFO][5994] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:54.278478 containerd[1824]: 2026-03-04 01:06:54.245 [INFO][5994] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.71/26] IPv6=[] ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" HandleID="k8s-pod-network.2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.279122 containerd[1824]: 2026-03-04 01:06:54.248 [INFO][5973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"f7716632-24cc-4b2d-9197-7d4214b114df", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"calico-apiserver-7c58d797df-d2knn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliba523b532fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:54.279122 containerd[1824]: 2026-03-04 01:06:54.248 [INFO][5973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.71/32] ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.279122 containerd[1824]: 2026-03-04 01:06:54.248 [INFO][5973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba523b532fe ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.279122 containerd[1824]: 2026-03-04 01:06:54.250 [INFO][5973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" 
WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.279122 containerd[1824]: 2026-03-04 01:06:54.251 [INFO][5973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"f7716632-24cc-4b2d-9197-7d4214b114df", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4", Pod:"calico-apiserver-7c58d797df-d2knn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliba523b532fe", MAC:"82:1f:a0:e2:55:f7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:54.279122 containerd[1824]: 2026-03-04 01:06:54.272 [INFO][5973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4" Namespace="calico-system" Pod="calico-apiserver-7c58d797df-d2knn" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:06:54.339102 containerd[1824]: time="2026-03-04T01:06:54.338725122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:54.339102 containerd[1824]: time="2026-03-04T01:06:54.338783362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:54.339102 containerd[1824]: time="2026-03-04T01:06:54.338797122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:54.339102 containerd[1824]: time="2026-03-04T01:06:54.339056282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:54.356653 systemd-networkd[1398]: cali22469ecea05: Link UP Mar 4 01:06:54.357581 systemd-networkd[1398]: cali22469ecea05: Gained carrier Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.187 [INFO][5982] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0 goldmane-5b85766d88- calico-system ea0036f7-27e7-42ae-8dc8-5a1007c59806 1023 0 2026-03-04 01:06:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-8ef68d175b goldmane-5b85766d88-q7bfd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali22469ecea05 [] [] }} ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.187 [INFO][5982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.227 [INFO][6003] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" HandleID="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.238 [INFO][6003] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" HandleID="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8ef68d175b", "pod":"goldmane-5b85766d88-q7bfd", "timestamp":"2026-03-04 01:06:54.227930042 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8ef68d175b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001842c0)} Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.238 [INFO][6003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.245 [INFO][6003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.246 [INFO][6003] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8ef68d175b' Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.307 [INFO][6003] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.314 [INFO][6003] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.319 [INFO][6003] ipam/ipam.go 526: Trying affinity for 192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.321 [INFO][6003] ipam/ipam.go 160: Attempting to load block cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.323 [INFO][6003] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.36.64/26 host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.323 [INFO][6003] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.36.64/26 handle="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.325 [INFO][6003] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39 Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.333 [INFO][6003] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.36.64/26 handle="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.345 [INFO][6003] ipam/ipam.go 1288: Successfully claimed 
IPs: [192.168.36.72/26] block=192.168.36.64/26 handle="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.345 [INFO][6003] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.36.72/26] handle="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" host="ci-4081.3.6-n-8ef68d175b" Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.345 [INFO][6003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:06:54.382284 containerd[1824]: 2026-03-04 01:06:54.345 [INFO][6003] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.36.72/26] IPv6=[] ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" HandleID="k8s-pod-network.eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.385054 containerd[1824]: 2026-03-04 01:06:54.347 [INFO][5982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ea0036f7-27e7-42ae-8dc8-5a1007c59806", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"", Pod:"goldmane-5b85766d88-q7bfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22469ecea05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:54.385054 containerd[1824]: 2026-03-04 01:06:54.347 [INFO][5982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.36.72/32] ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.385054 containerd[1824]: 2026-03-04 01:06:54.347 [INFO][5982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22469ecea05 ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.385054 containerd[1824]: 2026-03-04 01:06:54.359 [INFO][5982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.385054 containerd[1824]: 2026-03-04 01:06:54.359 [INFO][5982] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ea0036f7-27e7-42ae-8dc8-5a1007c59806", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39", Pod:"goldmane-5b85766d88-q7bfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22469ecea05", MAC:"22:86:79:98:37:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:06:54.385054 containerd[1824]: 2026-03-04 01:06:54.379 [INFO][5982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39" Namespace="calico-system" 
Pod="goldmane-5b85766d88-q7bfd" WorkloadEndpoint="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:06:54.411934 containerd[1824]: time="2026-03-04T01:06:54.411204802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:06:54.411934 containerd[1824]: time="2026-03-04T01:06:54.411254442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:06:54.411934 containerd[1824]: time="2026-03-04T01:06:54.411269802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:54.411934 containerd[1824]: time="2026-03-04T01:06:54.411346882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:06:54.417130 containerd[1824]: time="2026-03-04T01:06:54.416215442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c58d797df-d2knn,Uid:f7716632-24cc-4b2d-9197-7d4214b114df,Namespace:calico-system,Attempt:1,} returns sandbox id \"2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4\"" Mar 4 01:06:54.485505 containerd[1824]: time="2026-03-04T01:06:54.485467361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-q7bfd,Uid:ea0036f7-27e7-42ae-8dc8-5a1007c59806,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39\"" Mar 4 01:06:54.599506 systemd-networkd[1398]: cali63968f8fd40: Gained IPv6LL Mar 4 01:06:55.687575 systemd-networkd[1398]: caliba523b532fe: Gained IPv6LL Mar 4 01:06:56.135784 systemd-networkd[1398]: cali22469ecea05: Gained IPv6LL Mar 4 01:06:56.609028 containerd[1824]: time="2026-03-04T01:06:56.608975756Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:56.611928 containerd[1824]: time="2026-03-04T01:06:56.611770436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 4 01:06:56.614794 containerd[1824]: time="2026-03-04T01:06:56.614743996Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:56.619179 containerd[1824]: time="2026-03-04T01:06:56.619115276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:56.619934 containerd[1824]: time="2026-03-04T01:06:56.619823996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.635858871s" Mar 4 01:06:56.619934 containerd[1824]: time="2026-03-04T01:06:56.619856036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 01:06:56.622779 containerd[1824]: time="2026-03-04T01:06:56.622680916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 01:06:56.628627 containerd[1824]: time="2026-03-04T01:06:56.628600876Z" level=info msg="CreateContainer within sandbox \"f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:06:56.665597 
containerd[1824]: time="2026-03-04T01:06:56.665554396Z" level=info msg="CreateContainer within sandbox \"f63c926b9d2418e94fd153b4d99f7f6f6a192a1f1f3c6b5aa8a5bd34e1c9f3fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2ccdd14e5f3675143749c54ecf4e427869461ccd3eecd66ce44fe12fae9a663e\"" Mar 4 01:06:56.666293 containerd[1824]: time="2026-03-04T01:06:56.665991756Z" level=info msg="StartContainer for \"2ccdd14e5f3675143749c54ecf4e427869461ccd3eecd66ce44fe12fae9a663e\"" Mar 4 01:06:56.731430 containerd[1824]: time="2026-03-04T01:06:56.731170836Z" level=info msg="StartContainer for \"2ccdd14e5f3675143749c54ecf4e427869461ccd3eecd66ce44fe12fae9a663e\" returns successfully" Mar 4 01:06:56.777813 kubelet[3385]: I0304 01:06:56.777480 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c58d797df-nrmb5" podStartSLOduration=38.13226985 podStartE2EDuration="43.777464916s" podCreationTimestamp="2026-03-04 01:06:13 +0000 UTC" firstStartedPulling="2026-03-04 01:06:50.97612553 +0000 UTC m=+58.674415303" lastFinishedPulling="2026-03-04 01:06:56.621320596 +0000 UTC m=+64.319610369" observedRunningTime="2026-03-04 01:06:56.775464036 +0000 UTC m=+64.473753849" watchObservedRunningTime="2026-03-04 01:06:56.777464916 +0000 UTC m=+64.475754729" Mar 4 01:06:56.942457 containerd[1824]: time="2026-03-04T01:06:56.942286676Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:06:56.946233 containerd[1824]: time="2026-03-04T01:06:56.945782596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 01:06:56.949060 containerd[1824]: time="2026-03-04T01:06:56.949014396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 326.28372ms" Mar 4 01:06:56.949199 containerd[1824]: time="2026-03-04T01:06:56.949159716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 4 01:06:56.950512 containerd[1824]: time="2026-03-04T01:06:56.950491916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 01:06:56.956422 containerd[1824]: time="2026-03-04T01:06:56.956392756Z" level=info msg="CreateContainer within sandbox \"2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 01:06:56.994292 containerd[1824]: time="2026-03-04T01:06:56.994248116Z" level=info msg="CreateContainer within sandbox \"2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a339a40e7ff22c8b0f0b66993ffe62171bb39e246e811a99be3a6898ce55ff4\"" Mar 4 01:06:56.995460 containerd[1824]: time="2026-03-04T01:06:56.995436796Z" level=info msg="StartContainer for \"1a339a40e7ff22c8b0f0b66993ffe62171bb39e246e811a99be3a6898ce55ff4\"" Mar 4 01:06:57.058516 containerd[1824]: time="2026-03-04T01:06:57.058470635Z" level=info msg="StartContainer for \"1a339a40e7ff22c8b0f0b66993ffe62171bb39e246e811a99be3a6898ce55ff4\" returns successfully" Mar 4 01:06:57.780481 kubelet[3385]: I0304 01:06:57.780175 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c58d797df-d2knn" podStartSLOduration=42.251596632 podStartE2EDuration="44.780156266s" podCreationTimestamp="2026-03-04 01:06:13 +0000 UTC" firstStartedPulling="2026-03-04 01:06:54.421321602 +0000 UTC m=+62.119611375" 
lastFinishedPulling="2026-03-04 01:06:56.949881196 +0000 UTC m=+64.648171009" observedRunningTime="2026-03-04 01:06:57.780116506 +0000 UTC m=+65.478406319" watchObservedRunningTime="2026-03-04 01:06:57.780156266 +0000 UTC m=+65.478446119" Mar 4 01:06:58.766571 kubelet[3385]: I0304 01:06:58.766537 3385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:06:59.830747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628811110.mount: Deactivated successfully. Mar 4 01:07:00.476903 containerd[1824]: time="2026-03-04T01:07:00.476854615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:07:00.479594 containerd[1824]: time="2026-03-04T01:07:00.479401895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 4 01:07:00.483110 containerd[1824]: time="2026-03-04T01:07:00.482768535Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:07:00.487770 containerd[1824]: time="2026-03-04T01:07:00.487742734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:07:00.488519 containerd[1824]: time="2026-03-04T01:07:00.488489694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.537886778s" Mar 4 01:07:00.488615 containerd[1824]: time="2026-03-04T01:07:00.488600494Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 4 01:07:00.497622 containerd[1824]: time="2026-03-04T01:07:00.497595094Z" level=info msg="CreateContainer within sandbox \"eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 01:07:00.530050 containerd[1824]: time="2026-03-04T01:07:00.530005334Z" level=info msg="CreateContainer within sandbox \"eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ff0f529866540d99049cb374fde4f2db1d44418fc0afa0740946a55b9e6b6926\"" Mar 4 01:07:00.531687 containerd[1824]: time="2026-03-04T01:07:00.531401294Z" level=info msg="StartContainer for \"ff0f529866540d99049cb374fde4f2db1d44418fc0afa0740946a55b9e6b6926\"" Mar 4 01:07:00.597250 containerd[1824]: time="2026-03-04T01:07:00.596975892Z" level=info msg="StartContainer for \"ff0f529866540d99049cb374fde4f2db1d44418fc0afa0740946a55b9e6b6926\" returns successfully" Mar 4 01:07:00.873246 kubelet[3385]: I0304 01:07:00.872059 3385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-q7bfd" podStartSLOduration=41.869608954 podStartE2EDuration="47.872042167s" podCreationTimestamp="2026-03-04 01:06:13 +0000 UTC" firstStartedPulling="2026-03-04 01:06:54.487036761 +0000 UTC m=+62.185326574" lastFinishedPulling="2026-03-04 01:07:00.489469974 +0000 UTC m=+68.187759787" observedRunningTime="2026-03-04 01:07:00.792939289 +0000 UTC m=+68.491229142" watchObservedRunningTime="2026-03-04 01:07:00.872042167 +0000 UTC m=+68.570331940" Mar 4 01:07:15.985701 systemd[1]: run-containerd-runc-k8s.io-ff0f529866540d99049cb374fde4f2db1d44418fc0afa0740946a55b9e6b6926-runc.wlHYWO.mount: Deactivated successfully. 
Mar 4 01:07:23.634277 kubelet[3385]: I0304 01:07:23.633521 3385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:07:53.755564 systemd[1]: run-containerd-runc-k8s.io-a30d31ca93a96f360947d3d46651fe6cea431a5729525c13ce9615936c9237bb-runc.4pttE4.mount: Deactivated successfully. Mar 4 01:07:54.102437 containerd[1824]: time="2026-03-04T01:07:54.102368144Z" level=info msg="StopPodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\"" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.137 [WARNING][6528] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ea0036f7-27e7-42ae-8dc8-5a1007c59806", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39", Pod:"goldmane-5b85766d88-q7bfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22469ecea05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.138 [INFO][6528] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.138 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" iface="eth0" netns="" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.138 [INFO][6528] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.138 [INFO][6528] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.155 [INFO][6535] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.155 [INFO][6535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.155 [INFO][6535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.164 [WARNING][6535] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.164 [INFO][6535] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.165 [INFO][6535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:07:54.169977 containerd[1824]: 2026-03-04 01:07:54.167 [INFO][6528] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.169977 containerd[1824]: time="2026-03-04T01:07:54.169639622Z" level=info msg="TearDown network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" successfully" Mar 4 01:07:54.169977 containerd[1824]: time="2026-03-04T01:07:54.169664462Z" level=info msg="StopPodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" returns successfully" Mar 4 01:07:54.170849 containerd[1824]: time="2026-03-04T01:07:54.170274142Z" level=info msg="RemovePodSandbox for \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\"" Mar 4 01:07:54.170849 containerd[1824]: time="2026-03-04T01:07:54.170302302Z" level=info msg="Forcibly stopping sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\"" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.204 [WARNING][6549] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ea0036f7-27e7-42ae-8dc8-5a1007c59806", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"eb161d0d1308331435becc37ecac70d9a262211963904e8a34e73c9664b3cd39", Pod:"goldmane-5b85766d88-q7bfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.36.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22469ecea05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.204 [INFO][6549] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.204 [INFO][6549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" iface="eth0" netns="" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.204 [INFO][6549] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.204 [INFO][6549] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.225 [INFO][6556] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.225 [INFO][6556] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.225 [INFO][6556] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.235 [WARNING][6556] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.235 [INFO][6556] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" HandleID="k8s-pod-network.df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Workload="ci--4081.3.6--n--8ef68d175b-k8s-goldmane--5b85766d88--q7bfd-eth0" Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.237 [INFO][6556] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:07:54.242972 containerd[1824]: 2026-03-04 01:07:54.239 [INFO][6549] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1" Mar 4 01:07:54.242972 containerd[1824]: time="2026-03-04T01:07:54.242554140Z" level=info msg="TearDown network for sandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" successfully" Mar 4 01:07:54.254797 containerd[1824]: time="2026-03-04T01:07:54.254608259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:07:54.254797 containerd[1824]: time="2026-03-04T01:07:54.254712579Z" level=info msg="RemovePodSandbox \"df4a1c9aa347d250f2e6c0c2086e8e454864e8fd90a80e3c15aa9bd780a9b2c1\" returns successfully" Mar 4 01:07:54.255437 containerd[1824]: time="2026-03-04T01:07:54.255089739Z" level=info msg="StopPodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\"" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.285 [WARNING][6570] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"f7716632-24cc-4b2d-9197-7d4214b114df", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4", Pod:"calico-apiserver-7c58d797df-d2knn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliba523b532fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.286 [INFO][6570] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.286 [INFO][6570] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" iface="eth0" netns="" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.286 [INFO][6570] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.286 [INFO][6570] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.304 [INFO][6577] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.304 [INFO][6577] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.304 [INFO][6577] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.312 [WARNING][6577] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.313 [INFO][6577] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.314 [INFO][6577] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:07:54.318184 containerd[1824]: 2026-03-04 01:07:54.316 [INFO][6570] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.318814 containerd[1824]: time="2026-03-04T01:07:54.318511297Z" level=info msg="TearDown network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" successfully" Mar 4 01:07:54.318814 containerd[1824]: time="2026-03-04T01:07:54.318536497Z" level=info msg="StopPodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" returns successfully" Mar 4 01:07:54.319303 containerd[1824]: time="2026-03-04T01:07:54.318975977Z" level=info msg="RemovePodSandbox for \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\"" Mar 4 01:07:54.319303 containerd[1824]: time="2026-03-04T01:07:54.319002017Z" level=info msg="Forcibly stopping sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\"" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.353 [WARNING][6591] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0", GenerateName:"calico-apiserver-7c58d797df-", Namespace:"calico-system", SelfLink:"", UID:"f7716632-24cc-4b2d-9197-7d4214b114df", ResourceVersion:"1146", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 6, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c58d797df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"2099c1f9465888eff99dc74186251bbc3335be98fc79bc664204dd1d9bfdfdb4", Pod:"calico-apiserver-7c58d797df-d2knn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.36.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliba523b532fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.353 [INFO][6591] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.353 [INFO][6591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" iface="eth0" netns="" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.353 [INFO][6591] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.353 [INFO][6591] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.371 [INFO][6599] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.371 [INFO][6599] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.371 [INFO][6599] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.380 [WARNING][6599] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.380 [INFO][6599] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" HandleID="k8s-pod-network.084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Workload="ci--4081.3.6--n--8ef68d175b-k8s-calico--apiserver--7c58d797df--d2knn-eth0" Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.381 [INFO][6599] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:07:54.385027 containerd[1824]: 2026-03-04 01:07:54.383 [INFO][6591] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc" Mar 4 01:07:54.385027 containerd[1824]: time="2026-03-04T01:07:54.385006775Z" level=info msg="TearDown network for sandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" successfully" Mar 4 01:07:54.393416 containerd[1824]: time="2026-03-04T01:07:54.393340015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 4 01:07:54.393416 containerd[1824]: time="2026-03-04T01:07:54.393446735Z" level=info msg="RemovePodSandbox \"084fa165a4fd1643a1c86c1b6fba6b9a3f1bd5de4cc4c86794fc4ddf5a8f18bc\" returns successfully" Mar 4 01:07:54.394304 containerd[1824]: time="2026-03-04T01:07:54.393996975Z" level=info msg="StopPodSandbox for \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\"" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.427 [WARNING][6613] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3d654472-1f4e-4e23-8263-e9fe218626cc", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee", Pod:"coredns-674b8bbfcf-t9nxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63968f8fd40", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.428 [INFO][6613] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.428 [INFO][6613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" iface="eth0" netns="" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.428 [INFO][6613] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.428 [INFO][6613] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.450 [INFO][6620] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.451 [INFO][6620] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.451 [INFO][6620] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.459 [WARNING][6620] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.459 [INFO][6620] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.461 [INFO][6620] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:07:54.465966 containerd[1824]: 2026-03-04 01:07:54.463 [INFO][6613] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.466877 containerd[1824]: time="2026-03-04T01:07:54.466006412Z" level=info msg="TearDown network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" successfully" Mar 4 01:07:54.466877 containerd[1824]: time="2026-03-04T01:07:54.466032892Z" level=info msg="StopPodSandbox for \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" returns successfully" Mar 4 01:07:54.466877 containerd[1824]: time="2026-03-04T01:07:54.466452052Z" level=info msg="RemovePodSandbox for \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\"" Mar 4 01:07:54.466877 containerd[1824]: time="2026-03-04T01:07:54.466481772Z" level=info msg="Forcibly stopping sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\"" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.503 [WARNING][6635] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3d654472-1f4e-4e23-8263-e9fe218626cc", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 1, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8ef68d175b", ContainerID:"92f2fa1ed3acea691c7316495f69a26b4a709b77b14d78d3fd91066b0d63b9ee", Pod:"coredns-674b8bbfcf-t9nxp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.36.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63968f8fd40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 
01:07:54.504 [INFO][6635] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.505 [INFO][6635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" iface="eth0" netns="" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.505 [INFO][6635] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.505 [INFO][6635] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.523 [INFO][6642] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.523 [INFO][6642] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.523 [INFO][6642] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.532 [WARNING][6642] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.532 [INFO][6642] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" HandleID="k8s-pod-network.0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Workload="ci--4081.3.6--n--8ef68d175b-k8s-coredns--674b8bbfcf--t9nxp-eth0" Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.533 [INFO][6642] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 01:07:54.536706 containerd[1824]: 2026-03-04 01:07:54.535 [INFO][6635] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12" Mar 4 01:07:54.537106 containerd[1824]: time="2026-03-04T01:07:54.536744810Z" level=info msg="TearDown network for sandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" successfully" Mar 4 01:07:54.544219 containerd[1824]: time="2026-03-04T01:07:54.544180010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 4 01:07:54.544319 containerd[1824]: time="2026-03-04T01:07:54.544256010Z" level=info msg="RemovePodSandbox \"0a83f6ae457c60bc996c79c1ea3b8f8ebceda059a3b1cdb922d6ba3fe38dfd12\" returns successfully" Mar 4 01:08:07.960867 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:44244.service - OpenSSH per-connection server daemon (10.200.16.10:44244). 
Mar 4 01:08:08.449156 sshd[6690]: Accepted publickey for core from 10.200.16.10 port 44244 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:08.451991 sshd[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:08.458349 systemd-logind[1785]: New session 10 of user core. Mar 4 01:08:08.463714 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 01:08:08.889632 sshd[6690]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:08.893377 systemd-logind[1785]: Session 10 logged out. Waiting for processes to exit. Mar 4 01:08:08.894151 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:44244.service: Deactivated successfully. Mar 4 01:08:08.897594 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 01:08:08.898354 systemd-logind[1785]: Removed session 10. Mar 4 01:08:13.974607 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:43246.service - OpenSSH per-connection server daemon (10.200.16.10:43246). Mar 4 01:08:14.458325 sshd[6725]: Accepted publickey for core from 10.200.16.10 port 43246 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:14.459672 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:14.464775 systemd-logind[1785]: New session 11 of user core. Mar 4 01:08:14.469817 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 01:08:14.871520 sshd[6725]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:14.876119 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:43246.service: Deactivated successfully. Mar 4 01:08:14.878695 systemd-logind[1785]: Session 11 logged out. Waiting for processes to exit. Mar 4 01:08:14.878966 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:08:14.880598 systemd-logind[1785]: Removed session 11. 
Mar 4 01:08:15.887148 systemd[1]: run-containerd-runc-k8s.io-a30d31ca93a96f360947d3d46651fe6cea431a5729525c13ce9615936c9237bb-runc.9AwPF8.mount: Deactivated successfully. Mar 4 01:08:19.957590 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:33602.service - OpenSSH per-connection server daemon (10.200.16.10:33602). Mar 4 01:08:20.444475 sshd[6820]: Accepted publickey for core from 10.200.16.10 port 33602 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:20.446247 sshd[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:20.452709 systemd-logind[1785]: New session 12 of user core. Mar 4 01:08:20.459665 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 01:08:20.858924 sshd[6820]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:20.862793 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:33602.service: Deactivated successfully. Mar 4 01:08:20.863408 systemd-logind[1785]: Session 12 logged out. Waiting for processes to exit. Mar 4 01:08:20.865978 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 01:08:20.867770 systemd-logind[1785]: Removed session 12. Mar 4 01:08:25.945602 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:33616.service - OpenSSH per-connection server daemon (10.200.16.10:33616). Mar 4 01:08:26.430278 sshd[6874]: Accepted publickey for core from 10.200.16.10 port 33616 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:26.431651 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:26.435973 systemd-logind[1785]: New session 13 of user core. Mar 4 01:08:26.442604 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 4 01:08:26.866107 sshd[6874]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:26.868812 systemd-logind[1785]: Session 13 logged out. Waiting for processes to exit. 
Mar 4 01:08:26.871674 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:33616.service: Deactivated successfully. Mar 4 01:08:26.874311 systemd[1]: session-13.scope: Deactivated successfully. Mar 4 01:08:26.877317 systemd-logind[1785]: Removed session 13. Mar 4 01:08:26.951653 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:33622.service - OpenSSH per-connection server daemon (10.200.16.10:33622). Mar 4 01:08:27.439509 sshd[6900]: Accepted publickey for core from 10.200.16.10 port 33622 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:27.440868 sshd[6900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:27.444747 systemd-logind[1785]: New session 14 of user core. Mar 4 01:08:27.448607 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 4 01:08:27.884581 sshd[6900]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:27.888046 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:33622.service: Deactivated successfully. Mar 4 01:08:27.890525 systemd[1]: session-14.scope: Deactivated successfully. Mar 4 01:08:27.890653 systemd-logind[1785]: Session 14 logged out. Waiting for processes to exit. Mar 4 01:08:27.892760 systemd-logind[1785]: Removed session 14. Mar 4 01:08:27.968585 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:33630.service - OpenSSH per-connection server daemon (10.200.16.10:33630). Mar 4 01:08:28.452829 sshd[6911]: Accepted publickey for core from 10.200.16.10 port 33630 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:28.454291 sshd[6911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:28.458140 systemd-logind[1785]: New session 15 of user core. Mar 4 01:08:28.464654 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 4 01:08:28.864628 sshd[6911]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:28.869834 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:33630.service: Deactivated successfully. Mar 4 01:08:28.870114 systemd-logind[1785]: Session 15 logged out. Waiting for processes to exit. Mar 4 01:08:28.872625 systemd[1]: session-15.scope: Deactivated successfully. Mar 4 01:08:28.874011 systemd-logind[1785]: Removed session 15. Mar 4 01:08:33.949775 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:53248.service - OpenSSH per-connection server daemon (10.200.16.10:53248). Mar 4 01:08:34.437102 sshd[6947]: Accepted publickey for core from 10.200.16.10 port 53248 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:34.437983 sshd[6947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:34.442319 systemd-logind[1785]: New session 16 of user core. Mar 4 01:08:34.449525 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 4 01:08:34.850593 sshd[6947]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:34.853692 systemd-logind[1785]: Session 16 logged out. Waiting for processes to exit. Mar 4 01:08:34.855196 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:53248.service: Deactivated successfully. Mar 4 01:08:34.859668 systemd[1]: session-16.scope: Deactivated successfully. Mar 4 01:08:34.860999 systemd-logind[1785]: Removed session 16. Mar 4 01:08:34.936621 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:53256.service - OpenSSH per-connection server daemon (10.200.16.10:53256). Mar 4 01:08:35.423991 sshd[6961]: Accepted publickey for core from 10.200.16.10 port 53256 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:35.425349 sshd[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:35.429755 systemd-logind[1785]: New session 17 of user core. 
Mar 4 01:08:35.434620 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 4 01:08:35.954924 sshd[6961]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:35.958181 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:53256.service: Deactivated successfully. Mar 4 01:08:35.964484 systemd[1]: session-17.scope: Deactivated successfully. Mar 4 01:08:35.965509 systemd-logind[1785]: Session 17 logged out. Waiting for processes to exit. Mar 4 01:08:35.966960 systemd-logind[1785]: Removed session 17. Mar 4 01:08:36.038698 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:53270.service - OpenSSH per-connection server daemon (10.200.16.10:53270). Mar 4 01:08:36.521415 sshd[6973]: Accepted publickey for core from 10.200.16.10 port 53270 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:36.522654 sshd[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:36.526242 systemd-logind[1785]: New session 18 of user core. Mar 4 01:08:36.534649 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 4 01:08:37.806244 sshd[6973]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:37.812034 systemd-logind[1785]: Session 18 logged out. Waiting for processes to exit. Mar 4 01:08:37.812507 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:53270.service: Deactivated successfully. Mar 4 01:08:37.815571 systemd[1]: session-18.scope: Deactivated successfully. Mar 4 01:08:37.817043 systemd-logind[1785]: Removed session 18. Mar 4 01:08:37.891698 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:53276.service - OpenSSH per-connection server daemon (10.200.16.10:53276). 
Mar 4 01:08:38.378133 sshd[7004]: Accepted publickey for core from 10.200.16.10 port 53276 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:38.378969 sshd[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:38.382970 systemd-logind[1785]: New session 19 of user core. Mar 4 01:08:38.387712 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 4 01:08:38.901339 sshd[7004]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:38.905302 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:53276.service: Deactivated successfully. Mar 4 01:08:38.908355 systemd[1]: session-19.scope: Deactivated successfully. Mar 4 01:08:38.908892 systemd-logind[1785]: Session 19 logged out. Waiting for processes to exit. Mar 4 01:08:38.911099 systemd-logind[1785]: Removed session 19. Mar 4 01:08:39.007589 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:53282.service - OpenSSH per-connection server daemon (10.200.16.10:53282). Mar 4 01:08:39.493149 sshd[7016]: Accepted publickey for core from 10.200.16.10 port 53282 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:39.495962 sshd[7016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:39.505188 systemd-logind[1785]: New session 20 of user core. Mar 4 01:08:39.512665 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 4 01:08:39.928885 sshd[7016]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:39.934444 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:53282.service: Deactivated successfully. Mar 4 01:08:39.937864 systemd-logind[1785]: Session 20 logged out. Waiting for processes to exit. Mar 4 01:08:39.938800 systemd[1]: session-20.scope: Deactivated successfully. Mar 4 01:08:39.940270 systemd-logind[1785]: Removed session 20. 
Mar 4 01:08:45.017153 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:57368.service - OpenSSH per-connection server daemon (10.200.16.10:57368). Mar 4 01:08:45.499966 sshd[7056]: Accepted publickey for core from 10.200.16.10 port 57368 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:45.501442 sshd[7056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:45.505727 systemd-logind[1785]: New session 21 of user core. Mar 4 01:08:45.513786 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 4 01:08:45.911638 sshd[7056]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:45.916664 systemd-logind[1785]: Session 21 logged out. Waiting for processes to exit. Mar 4 01:08:45.917251 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:57368.service: Deactivated successfully. Mar 4 01:08:45.921743 systemd[1]: session-21.scope: Deactivated successfully. Mar 4 01:08:45.922835 systemd-logind[1785]: Removed session 21. Mar 4 01:08:50.991591 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:38474.service - OpenSSH per-connection server daemon (10.200.16.10:38474). Mar 4 01:08:51.438058 sshd[7069]: Accepted publickey for core from 10.200.16.10 port 38474 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:51.438924 sshd[7069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:51.442957 systemd-logind[1785]: New session 22 of user core. Mar 4 01:08:51.447673 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 4 01:08:51.830568 sshd[7069]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:51.833551 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:38474.service: Deactivated successfully. Mar 4 01:08:51.837271 systemd-logind[1785]: Session 22 logged out. Waiting for processes to exit. Mar 4 01:08:51.838084 systemd[1]: session-22.scope: Deactivated successfully. 
Mar 4 01:08:51.840152 systemd-logind[1785]: Removed session 22. Mar 4 01:08:56.917700 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:38482.service - OpenSSH per-connection server daemon (10.200.16.10:38482). Mar 4 01:08:57.403053 sshd[7104]: Accepted publickey for core from 10.200.16.10 port 38482 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:08:57.404492 sshd[7104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:08:57.408320 systemd-logind[1785]: New session 23 of user core. Mar 4 01:08:57.414691 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 4 01:08:57.811163 sshd[7104]: pam_unix(sshd:session): session closed for user core Mar 4 01:08:57.814501 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:38482.service: Deactivated successfully. Mar 4 01:08:57.818403 systemd[1]: session-23.scope: Deactivated successfully. Mar 4 01:08:57.819135 systemd-logind[1785]: Session 23 logged out. Waiting for processes to exit. Mar 4 01:08:57.819969 systemd-logind[1785]: Removed session 23. Mar 4 01:09:02.895576 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:58026.service - OpenSSH per-connection server daemon (10.200.16.10:58026). Mar 4 01:09:03.389619 sshd[7139]: Accepted publickey for core from 10.200.16.10 port 58026 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:09:03.391516 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:09:03.396896 systemd-logind[1785]: New session 24 of user core. Mar 4 01:09:03.398640 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 4 01:09:03.804009 sshd[7139]: pam_unix(sshd:session): session closed for user core Mar 4 01:09:03.806845 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:58026.service: Deactivated successfully. Mar 4 01:09:03.811539 systemd[1]: session-24.scope: Deactivated successfully. Mar 4 01:09:03.813887 systemd-logind[1785]: Session 24 logged out. 
Waiting for processes to exit. Mar 4 01:09:03.814883 systemd-logind[1785]: Removed session 24. Mar 4 01:09:08.890575 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:58040.service - OpenSSH per-connection server daemon (10.200.16.10:58040). Mar 4 01:09:09.371797 sshd[7152]: Accepted publickey for core from 10.200.16.10 port 58040 ssh2: RSA SHA256:HLwtV5Q6+Nrm97iUXPNyxNazhhYdwDT8OGVrGRHoNr4 Mar 4 01:09:09.372808 sshd[7152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:09:09.376507 systemd-logind[1785]: New session 25 of user core. Mar 4 01:09:09.381475 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 4 01:09:09.772560 sshd[7152]: pam_unix(sshd:session): session closed for user core Mar 4 01:09:09.779643 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:58040.service: Deactivated successfully. Mar 4 01:09:09.786836 systemd-logind[1785]: Session 25 logged out. Waiting for processes to exit. Mar 4 01:09:09.787004 systemd[1]: session-25.scope: Deactivated successfully. Mar 4 01:09:09.789419 systemd-logind[1785]: Removed session 25.