Mar 10 00:48:09.197520 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 10 00:48:09.197544 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Mar 9 23:01:00 -00 2026
Mar 10 00:48:09.197552 kernel: KASLR enabled
Mar 10 00:48:09.197558 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 10 00:48:09.197566 kernel: printk: bootconsole [pl11] enabled
Mar 10 00:48:09.197571 kernel: efi: EFI v2.7 by EDK II
Mar 10 00:48:09.197579 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 10 00:48:09.197585 kernel: random: crng init done
Mar 10 00:48:09.197591 kernel: ACPI: Early table checksum verification disabled
Mar 10 00:48:09.197597 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 10 00:48:09.197603 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197609 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197617 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 10 00:48:09.197623 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197631 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197637 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197644 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197653 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197659 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197665 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 10 00:48:09.197672 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197678 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 10 00:48:09.197684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 10 00:48:09.197690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 10 00:48:09.197697 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 10 00:48:09.197703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 10 00:48:09.197710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 10 00:48:09.197716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 10 00:48:09.197724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 10 00:48:09.197731 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 10 00:48:09.197738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 10 00:48:09.197744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 10 00:48:09.197751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 10 00:48:09.197757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 10 00:48:09.197763 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 10 00:48:09.197769 kernel: Zone ranges:
Mar 10 00:48:09.197776 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Mar 10 00:48:09.197782 kernel:   DMA32    empty
Mar 10 00:48:09.197788 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Mar 10 00:48:09.197795 kernel: Movable zone start for each node
Mar 10 00:48:09.197805 kernel: Early memory node ranges
Mar 10 00:48:09.197812 kernel:   node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 10 00:48:09.197819 kernel:   node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 10 00:48:09.197826 kernel:   node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 10 00:48:09.197833 kernel:   node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 10 00:48:09.197841 kernel:   node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 10 00:48:09.197847 kernel:   node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 10 00:48:09.197854 kernel:   node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 10 00:48:09.197861 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 10 00:48:09.197867 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 10 00:48:09.197874 kernel: psci: probing for conduit method from ACPI.
Mar 10 00:48:09.197881 kernel: psci: PSCIv1.1 detected in firmware.
Mar 10 00:48:09.197888 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 10 00:48:09.197895 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 10 00:48:09.199654 kernel: psci: SMC Calling Convention v1.4
Mar 10 00:48:09.199663 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 10 00:48:09.199670 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 10 00:48:09.199683 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 10 00:48:09.199690 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 10 00:48:09.199697 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 10 00:48:09.199704 kernel: Detected PIPT I-cache on CPU0
Mar 10 00:48:09.199711 kernel: CPU features: detected: GIC system register CPU interface
Mar 10 00:48:09.199718 kernel: CPU features: detected: Hardware dirty bit management
Mar 10 00:48:09.199725 kernel: CPU features: detected: Spectre-BHB
Mar 10 00:48:09.199732 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 10 00:48:09.199738 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 10 00:48:09.199745 kernel: CPU features: detected: ARM erratum 1418040
Mar 10 00:48:09.199752 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 10 00:48:09.199761 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 10 00:48:09.199768 kernel: alternatives: applying boot alternatives
Mar 10 00:48:09.199777 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1cb2f7ec5607d2e9d8553783fe2055fabeb5d47c96b1d8f75c394b81aeedb17c
Mar 10 00:48:09.199784 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 00:48:09.199791 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 00:48:09.199798 kernel: Fallback order for Node 0: 0
Mar 10 00:48:09.199805 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 10 00:48:09.199812 kernel: Policy zone: Normal
Mar 10 00:48:09.199819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 00:48:09.199825 kernel: software IO TLB: area num 2.
Mar 10 00:48:09.199832 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 10 00:48:09.199841 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 10 00:48:09.199848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 10 00:48:09.199855 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 00:48:09.199863 kernel: rcu: RCU event tracing is enabled.
Mar 10 00:48:09.199870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 10 00:48:09.199877 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 00:48:09.199884 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 00:48:09.199891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 00:48:09.199909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 10 00:48:09.199916 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 10 00:48:09.199923 kernel: GICv3: 960 SPIs implemented
Mar 10 00:48:09.199932 kernel: GICv3: 0 Extended SPIs implemented
Mar 10 00:48:09.199939 kernel: Root IRQ handler: gic_handle_irq
Mar 10 00:48:09.199946 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 10 00:48:09.199953 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 10 00:48:09.199959 kernel: ITS: No ITS available, not enabling LPIs
Mar 10 00:48:09.199967 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 00:48:09.199974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 10 00:48:09.199981 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 10 00:48:09.199988 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 10 00:48:09.199995 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 10 00:48:09.200002 kernel: Console: colour dummy device 80x25
Mar 10 00:48:09.200011 kernel: printk: console [tty1] enabled
Mar 10 00:48:09.200018 kernel: ACPI: Core revision 20230628
Mar 10 00:48:09.200026 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 10 00:48:09.200033 kernel: pid_max: default: 32768 minimum: 301
Mar 10 00:48:09.200040 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 00:48:09.200047 kernel: landlock: Up and running.
Mar 10 00:48:09.200054 kernel: SELinux: Initializing.
Mar 10 00:48:09.200061 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200068 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200077 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 10 00:48:09.200084 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 10 00:48:09.200092 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 10 00:48:09.200099 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 10 00:48:09.200106 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 10 00:48:09.200113 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 00:48:09.200120 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 00:48:09.200127 kernel: Remapping and enabling EFI services.
Mar 10 00:48:09.200142 kernel: smp: Bringing up secondary CPUs ...
Mar 10 00:48:09.200149 kernel: Detected PIPT I-cache on CPU1
Mar 10 00:48:09.200156 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 10 00:48:09.200164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 10 00:48:09.200173 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 10 00:48:09.200180 kernel: smp: Brought up 1 node, 2 CPUs
Mar 10 00:48:09.200188 kernel: SMP: Total of 2 processors activated.
Mar 10 00:48:09.200195 kernel: CPU features: detected: 32-bit EL0 Support
Mar 10 00:48:09.200203 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 10 00:48:09.200212 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 10 00:48:09.200220 kernel: CPU features: detected: CRC32 instructions
Mar 10 00:48:09.200227 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 10 00:48:09.200235 kernel: CPU features: detected: LSE atomic instructions
Mar 10 00:48:09.200243 kernel: CPU features: detected: Privileged Access Never
Mar 10 00:48:09.200250 kernel: CPU: All CPU(s) started at EL1
Mar 10 00:48:09.200258 kernel: alternatives: applying system-wide alternatives
Mar 10 00:48:09.200265 kernel: devtmpfs: initialized
Mar 10 00:48:09.200273 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 00:48:09.200282 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 10 00:48:09.200290 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 00:48:09.200297 kernel: SMBIOS 3.1.0 present.
Mar 10 00:48:09.200305 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 10 00:48:09.200312 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 00:48:09.200320 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 10 00:48:09.200328 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 10 00:48:09.200335 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 10 00:48:09.200343 kernel: audit: initializing netlink subsys (disabled)
Mar 10 00:48:09.200352 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 10 00:48:09.200359 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 00:48:09.200367 kernel: cpuidle: using governor menu
Mar 10 00:48:09.200374 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 10 00:48:09.200382 kernel: ASID allocator initialised with 32768 entries
Mar 10 00:48:09.200389 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 00:48:09.200396 kernel: Serial: AMBA PL011 UART driver
Mar 10 00:48:09.200404 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 10 00:48:09.200411 kernel: Modules: 0 pages in range for non-PLT usage
Mar 10 00:48:09.200421 kernel: Modules: 509008 pages in range for PLT usage
Mar 10 00:48:09.200428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200436 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 00:48:09.200443 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200451 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 10 00:48:09.200458 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 00:48:09.200473 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200481 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 10 00:48:09.200490 kernel: ACPI: Added _OSI(Module Device)
Mar 10 00:48:09.200498 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 00:48:09.200506 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 00:48:09.200513 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 00:48:09.200521 kernel: ACPI: Interpreter enabled
Mar 10 00:48:09.200529 kernel: ACPI: Using GIC for interrupt routing
Mar 10 00:48:09.200536 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 10 00:48:09.200544 kernel: printk: console [ttyAMA0] enabled
Mar 10 00:48:09.200551 kernel: printk: bootconsole [pl11] disabled
Mar 10 00:48:09.200561 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 10 00:48:09.200568 kernel: iommu: Default domain type: Translated
Mar 10 00:48:09.200576 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 10 00:48:09.200584 kernel: efivars: Registered efivars operations
Mar 10 00:48:09.200591 kernel: vgaarb: loaded
Mar 10 00:48:09.200599 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 10 00:48:09.200606 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 00:48:09.200614 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 00:48:09.200621 kernel: pnp: PnP ACPI init
Mar 10 00:48:09.200631 kernel: pnp: PnP ACPI: found 0 devices
Mar 10 00:48:09.200639 kernel: NET: Registered PF_INET protocol family
Mar 10 00:48:09.200646 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 00:48:09.200654 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 00:48:09.200662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 00:48:09.200670 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 00:48:09.200677 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 00:48:09.200685 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 00:48:09.200693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200709 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 00:48:09.200717 kernel: PCI: CLS 0 bytes, default 64
Mar 10 00:48:09.200724 kernel: kvm [1]: HYP mode not available
Mar 10 00:48:09.200732 kernel: Initialise system trusted keyrings
Mar 10 00:48:09.200740 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 00:48:09.200747 kernel: Key type asymmetric registered
Mar 10 00:48:09.200755 kernel: Asymmetric key parser 'x509' registered
Mar 10 00:48:09.200762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 10 00:48:09.200771 kernel: io scheduler mq-deadline registered
Mar 10 00:48:09.200779 kernel: io scheduler kyber registered
Mar 10 00:48:09.200786 kernel: io scheduler bfq registered
Mar 10 00:48:09.200794 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 00:48:09.200802 kernel: thunder_xcv, ver 1.0
Mar 10 00:48:09.200809 kernel: thunder_bgx, ver 1.0
Mar 10 00:48:09.200817 kernel: nicpf, ver 1.0
Mar 10 00:48:09.200824 kernel: nicvf, ver 1.0
Mar 10 00:48:09.201001 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 10 00:48:09.201082 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-10T00:48:08 UTC (1773103688)
Mar 10 00:48:09.201093 kernel: efifb: probing for efifb
Mar 10 00:48:09.201101 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 10 00:48:09.201109 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 10 00:48:09.201116 kernel: efifb: scrolling: redraw
Mar 10 00:48:09.201123 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 10 00:48:09.201131 kernel: Console: switching to colour frame buffer device 128x48
Mar 10 00:48:09.201138 kernel: fb0: EFI VGA frame buffer device
Mar 10 00:48:09.201148 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 10 00:48:09.201155 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 10 00:48:09.201163 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 10 00:48:09.201171 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 10 00:48:09.201178 kernel: watchdog: Hard watchdog permanently disabled
Mar 10 00:48:09.201186 kernel: NET: Registered PF_INET6 protocol family
Mar 10 00:48:09.201193 kernel: Segment Routing with IPv6
Mar 10 00:48:09.201201 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 00:48:09.201208 kernel: NET: Registered PF_PACKET protocol family
Mar 10 00:48:09.201218 kernel: Key type dns_resolver registered
Mar 10 00:48:09.201227 kernel: registered taskstats version 1
Mar 10 00:48:09.201234 kernel: Loading compiled-in X.509 certificates
Mar 10 00:48:09.201242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 7b495b3b5b313bb2164465cfb0bfaed4a10d01c4'
Mar 10 00:48:09.201249 kernel: Key type .fscrypt registered
Mar 10 00:48:09.201256 kernel: Key type fscrypt-provisioning registered
Mar 10 00:48:09.201264 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 00:48:09.201271 kernel: ima: Allocated hash algorithm: sha1
Mar 10 00:48:09.201278 kernel: ima: No architecture policies found
Mar 10 00:48:09.201288 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 10 00:48:09.201296 kernel: clk: Disabling unused clocks
Mar 10 00:48:09.201303 kernel: Freeing unused kernel memory: 39424K
Mar 10 00:48:09.201310 kernel: Run /init as init process
Mar 10 00:48:09.201318 kernel:   with arguments:
Mar 10 00:48:09.201325 kernel:     /init
Mar 10 00:48:09.201332 kernel:   with environment:
Mar 10 00:48:09.201340 kernel:     HOME=/
Mar 10 00:48:09.201347 kernel:     TERM=linux
Mar 10 00:48:09.201357 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 00:48:09.201368 systemd[1]: Detected virtualization microsoft.
Mar 10 00:48:09.201376 systemd[1]: Detected architecture arm64.
Mar 10 00:48:09.201387 systemd[1]: Running in initrd.
Mar 10 00:48:09.201395 systemd[1]: No hostname configured, using default hostname.
Mar 10 00:48:09.201403 systemd[1]: Hostname set to .
Mar 10 00:48:09.201411 systemd[1]: Initializing machine ID from random generator.
Mar 10 00:48:09.201420 systemd[1]: Queued start job for default target initrd.target.
Mar 10 00:48:09.201428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 00:48:09.201437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 00:48:09.201445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 00:48:09.201453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 00:48:09.201461 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 00:48:09.201470 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 00:48:09.201479 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 00:48:09.201489 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 00:48:09.201498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 00:48:09.201506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 00:48:09.201514 systemd[1]: Reached target paths.target - Path Units.
Mar 10 00:48:09.201522 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 00:48:09.201530 systemd[1]: Reached target swap.target - Swaps.
Mar 10 00:48:09.201538 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 00:48:09.201546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 00:48:09.201556 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 00:48:09.201564 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 00:48:09.201572 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 00:48:09.201580 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 00:48:09.201588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 00:48:09.201597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 00:48:09.201606 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 00:48:09.201614 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 00:48:09.201623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 00:48:09.201632 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 00:48:09.201640 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 00:48:09.201648 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 00:48:09.201656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 00:48:09.201684 systemd-journald[217]: Collecting audit messages is disabled.
Mar 10 00:48:09.201706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:48:09.201715 systemd-journald[217]: Journal started
Mar 10 00:48:09.201733 systemd-journald[217]: Runtime Journal (/run/log/journal/0cd6bb6788a74acf96e9759219920968) is 8.0M, max 78.5M, 70.5M free.
Mar 10 00:48:09.210742 systemd-modules-load[218]: Inserted module 'overlay'
Mar 10 00:48:09.228629 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 00:48:09.237913 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 00:48:09.238923 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 00:48:09.251514 kernel: Bridge firewalling registered
Mar 10 00:48:09.246285 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 10 00:48:09.247202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 00:48:09.257246 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 00:48:09.264799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 00:48:09.274514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:09.294252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 00:48:09.305430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 00:48:09.318252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 00:48:09.342109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 00:48:09.348253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 00:48:09.359165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 00:48:09.363973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 00:48:09.383069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 00:48:09.402121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 00:48:09.410067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 00:48:09.430839 dracut-cmdline[252]: dracut-dracut-053
Mar 10 00:48:09.439381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 00:48:09.457355 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1cb2f7ec5607d2e9d8553783fe2055fabeb5d47c96b1d8f75c394b81aeedb17c
Mar 10 00:48:09.453563 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 00:48:09.492459 systemd-resolved[253]: Positive Trust Anchors:
Mar 10 00:48:09.492469 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 00:48:09.492501 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 00:48:09.495742 systemd-resolved[253]: Defaulting to hostname 'linux'.
Mar 10 00:48:09.496669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 00:48:09.506231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 00:48:09.602918 kernel: SCSI subsystem initialized
Mar 10 00:48:09.612913 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 00:48:09.620930 kernel: iscsi: registered transport (tcp)
Mar 10 00:48:09.636974 kernel: iscsi: registered transport (qla4xxx)
Mar 10 00:48:09.637035 kernel: QLogic iSCSI HBA Driver
Mar 10 00:48:09.671500 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 00:48:09.686052 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 00:48:09.715949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 00:48:09.716016 kernel: device-mapper: uevent: version 1.0.3
Mar 10 00:48:09.721032 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 10 00:48:09.772920 kernel: raid6: neonx8   gen() 15793 MB/s
Mar 10 00:48:09.786905 kernel: raid6: neonx4   gen() 15691 MB/s
Mar 10 00:48:09.805904 kernel: raid6: neonx2   gen() 13322 MB/s
Mar 10 00:48:09.825917 kernel: raid6: neonx1   gen() 10489 MB/s
Mar 10 00:48:09.844902 kernel: raid6: int64x8  gen()  6977 MB/s
Mar 10 00:48:09.863903 kernel: raid6: int64x4  gen()  7369 MB/s
Mar 10 00:48:09.883903 kernel: raid6: int64x2  gen()  6142 MB/s
Mar 10 00:48:09.905613 kernel: raid6: int64x1  gen()  5069 MB/s
Mar 10 00:48:09.905624 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
Mar 10 00:48:09.928020 kernel: raid6: .... xor() 11992 MB/s, rmw enabled
Mar 10 00:48:09.928030 kernel: raid6: using neon recovery algorithm
Mar 10 00:48:09.938409 kernel: xor: measuring software checksum speed
Mar 10 00:48:09.938427 kernel:    8regs           : 19836 MB/sec
Mar 10 00:48:09.942026 kernel:    32regs          : 19585 MB/sec
Mar 10 00:48:09.944824 kernel:    arm64_neon      : 27061 MB/sec
Mar 10 00:48:09.948076 kernel: xor: using function: arm64_neon (27061 MB/sec)
Mar 10 00:48:09.998918 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 00:48:10.008409 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 00:48:10.024018 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 00:48:10.043476 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Mar 10 00:48:10.047884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 00:48:10.069076 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 00:48:10.085693 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Mar 10 00:48:10.115355 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 00:48:10.127384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 00:48:10.165548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 00:48:10.181258 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 00:48:10.210933 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 00:48:10.222281 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 00:48:10.236025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 00:48:10.244780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 00:48:10.274201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 00:48:10.295173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 00:48:10.295361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 00:48:10.320389 kernel: hv_vmbus: Vmbus version:5.3
Mar 10 00:48:10.320412 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 10 00:48:10.315561 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 00:48:10.331644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 00:48:10.336067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:10.388715 kernel: hv_vmbus: registering driver hv_netvsc
Mar 10 00:48:10.388737 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 10 00:48:10.388747 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 10 00:48:10.388757 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 10 00:48:10.388767 kernel: hv_vmbus: registering driver hv_storvsc
Mar 10 00:48:10.388777 kernel: PTP clock support registered
Mar 10 00:48:10.388786 kernel: scsi host0: storvsc_host_t
Mar 10 00:48:10.354111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:48:10.407007 kernel: scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
Mar 10 00:48:10.407052 kernel: hv_vmbus: registering driver hid_hyperv
Mar 10 00:48:10.407062 kernel: scsi host1: storvsc_host_t
Mar 10 00:48:10.383273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:48:10.438371 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 10 00:48:10.438401 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 10 00:48:10.438563 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 5
Mar 10 00:48:10.391703 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 00:48:10.428023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 00:48:10.428186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:10.448746 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:48:10.470189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:48:10.505398 kernel: hv_utils: Registering HyperV Utility Driver
Mar 10 00:48:10.505456 kernel: hv_vmbus: registering driver hv_utils
Mar 10 00:48:10.499765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:10.514967 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 10 00:48:10.515157 kernel: hv_utils: Heartbeat IC version 3.0
Mar 10 00:48:10.521113 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 00:48:10.521164 kernel: hv_utils: Shutdown IC version 3.2
Mar 10 00:48:10.931123 kernel: hv_utils: TimeSync IC version 4.0
Mar 10 00:48:10.931210 systemd-resolved[253]: Clock change detected. Flushing caches.
Mar 10 00:48:10.938805 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 10 00:48:10.939086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 00:48:10.974706 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 10 00:48:10.974932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 10 00:48:10.975036 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 10 00:48:10.982708 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 10 00:48:10.983919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 00:48:11.004364 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 10 00:48:11.004597 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 10 00:48:11.012942 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 10 00:48:11.012991 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 10 00:48:11.031719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 10 00:48:11.155196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 10 00:48:11.173623 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (489) Mar 10 00:48:11.173658 kernel: BTRFS: device fsid a63952f9-44dc-43e8-8cbd-3c9d9ba5a4f4 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (490) Mar 10 00:48:11.193434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 10 00:48:11.208707 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 10 00:48:11.214201 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 10 00:48:11.232379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 10 00:48:11.247816 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 00:48:11.272652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 10 00:48:12.287658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 10 00:48:12.287994 disk-uuid[596]: The operation has completed successfully. Mar 10 00:48:12.355030 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 10 00:48:12.356774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 00:48:12.386751 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 00:48:12.397477 sh[709]: Success Mar 10 00:48:12.414656 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 10 00:48:12.497436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 10 00:48:12.519777 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 00:48:12.527780 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 10 00:48:12.559824 kernel: BTRFS info (device dm-0): first mount of filesystem a63952f9-44dc-43e8-8cbd-3c9d9ba5a4f4 Mar 10 00:48:12.559890 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:12.565351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 10 00:48:12.569693 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 10 00:48:12.573279 kernel: BTRFS info (device dm-0): using free space tree Mar 10 00:48:12.655389 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 00:48:12.659715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 00:48:12.675813 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 10 00:48:12.680783 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 00:48:12.711207 kernel: BTRFS info (device sda6): first mount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:12.711240 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:12.714834 kernel: BTRFS info (device sda6): using free space tree Mar 10 00:48:12.729655 kernel: BTRFS info (device sda6): auto enabling async discard Mar 10 00:48:12.739078 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 10 00:48:12.748378 kernel: BTRFS info (device sda6): last unmount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:12.757555 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 00:48:12.767162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 10 00:48:12.813550 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 00:48:12.828781 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 10 00:48:12.861714 systemd-networkd[896]: lo: Link UP Mar 10 00:48:12.861721 systemd-networkd[896]: lo: Gained carrier Mar 10 00:48:12.862456 systemd-networkd[896]: Enumeration completed Mar 10 00:48:12.863110 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 00:48:12.865595 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:12.865598 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 00:48:12.866393 systemd-networkd[896]: eth0: Link UP Mar 10 00:48:12.866519 systemd-networkd[896]: eth0: Gained carrier Mar 10 00:48:12.866527 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:12.874615 systemd[1]: Reached target network.target - Network. Mar 10 00:48:12.920444 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 10 00:48:13.035088 ignition[832]: Ignition 2.19.0 Mar 10 00:48:13.035098 ignition[832]: Stage: fetch-offline Mar 10 00:48:13.038945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 00:48:13.035139 ignition[832]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.035147 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.035236 ignition[832]: parsed url from cmdline: "" Mar 10 00:48:13.035240 ignition[832]: no config URL provided Mar 10 00:48:13.035245 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 00:48:13.035251 ignition[832]: no config at "/usr/lib/ignition/user.ign" Mar 10 00:48:13.035256 ignition[832]: failed to fetch config: resource requires networking Mar 10 00:48:13.070788 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 10 00:48:13.035431 ignition[832]: Ignition finished successfully Mar 10 00:48:13.087458 ignition[905]: Ignition 2.19.0 Mar 10 00:48:13.087465 ignition[905]: Stage: fetch Mar 10 00:48:13.087693 ignition[905]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.087706 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.087812 ignition[905]: parsed url from cmdline: "" Mar 10 00:48:13.087815 ignition[905]: no config URL provided Mar 10 00:48:13.087820 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 00:48:13.087827 ignition[905]: no config at "/usr/lib/ignition/user.ign" Mar 10 00:48:13.087852 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 10 00:48:13.179911 ignition[905]: GET result: OK Mar 10 00:48:13.179992 ignition[905]: config has been read from IMDS userdata Mar 10 00:48:13.180035 ignition[905]: parsing config with SHA512: 0b17dfc4a97a06d00bfe0fa707ac8d03e5593c85d28db381031152fc91daba4e4ef46997d925f675bffcd117b851ed36e1a3192904d262d0eb6bfd1a7ccf4033 Mar 10 00:48:13.183724 unknown[905]: fetched base config from "system" Mar 10 00:48:13.184086 ignition[905]: fetch: fetch complete Mar 10 00:48:13.183731 unknown[905]: fetched base config from "system" Mar 10 00:48:13.184091 ignition[905]: fetch: fetch passed Mar 10 00:48:13.183737 unknown[905]: fetched user config from "azure" Mar 10 00:48:13.184131 ignition[905]: Ignition finished successfully Mar 10 00:48:13.187710 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 10 00:48:13.209778 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 10 00:48:13.226233 ignition[912]: Ignition 2.19.0 Mar 10 00:48:13.226241 ignition[912]: Stage: kargs Mar 10 00:48:13.226408 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.234821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 10 00:48:13.226417 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.227295 ignition[912]: kargs: kargs passed Mar 10 00:48:13.227337 ignition[912]: Ignition finished successfully Mar 10 00:48:13.255901 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 10 00:48:13.268309 ignition[918]: Ignition 2.19.0 Mar 10 00:48:13.268318 ignition[918]: Stage: disks Mar 10 00:48:13.268497 ignition[918]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.274334 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 10 00:48:13.268506 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.279299 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 10 00:48:13.271910 ignition[918]: disks: disks passed Mar 10 00:48:13.287488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 10 00:48:13.271961 ignition[918]: Ignition finished successfully Mar 10 00:48:13.297327 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 00:48:13.306554 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 00:48:13.315875 systemd[1]: Reached target basic.target - Basic System. Mar 10 00:48:13.339886 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 10 00:48:13.377042 systemd-fsck[927]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 10 00:48:13.383408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 10 00:48:13.395849 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 10 00:48:13.456651 kernel: EXT4-fs (sda9): mounted filesystem 844b1bba-9a09-4570-b8c9-fc554c45e730 r/w with ordered data mode. Quota mode: none. Mar 10 00:48:13.457300 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 10 00:48:13.461930 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Mar 10 00:48:13.485712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 00:48:13.496753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 10 00:48:13.519880 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Mar 10 00:48:13.519931 kernel: BTRFS info (device sda6): first mount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:13.519893 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 10 00:48:13.538232 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:13.538255 kernel: BTRFS info (device sda6): using free space tree Mar 10 00:48:13.532861 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 10 00:48:13.532896 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 00:48:13.544540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 10 00:48:13.574659 kernel: BTRFS info (device sda6): auto enabling async discard Mar 10 00:48:13.574807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 10 00:48:13.581065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 10 00:48:13.708295 coreos-metadata[940]: Mar 10 00:48:13.708 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 10 00:48:13.717685 coreos-metadata[940]: Mar 10 00:48:13.717 INFO Fetch successful Mar 10 00:48:13.717685 coreos-metadata[940]: Mar 10 00:48:13.717 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 10 00:48:13.734261 coreos-metadata[940]: Mar 10 00:48:13.734 INFO Fetch successful Mar 10 00:48:13.740408 coreos-metadata[940]: Mar 10 00:48:13.740 INFO wrote hostname ci-4081.3.6-n-9b959526b1 to /sysroot/etc/hostname Mar 10 00:48:13.748510 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 10 00:48:13.777161 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory Mar 10 00:48:13.789966 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Mar 10 00:48:13.798428 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Mar 10 00:48:13.806453 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Mar 10 00:48:13.960946 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: VF slot 1 added Mar 10 00:48:13.972387 kernel: hv_vmbus: registering driver hv_pci Mar 10 00:48:13.972449 kernel: hv_pci 47babfe7-6771-409d-a406-1ba9adcbb034: PCI VMBus probing: Using version 0x10004 Mar 10 00:48:13.983395 kernel: hv_pci 47babfe7-6771-409d-a406-1ba9adcbb034: PCI host bridge to bus 6771:00 Mar 10 00:48:13.983624 kernel: pci_bus 6771:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 10 00:48:13.983743 kernel: pci_bus 6771:00: No busn resource found for root bus, will use [bus 00-ff] Mar 10 00:48:13.993663 kernel: pci 6771:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 10 00:48:14.000679 kernel: pci 6771:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 10 00:48:14.005676 kernel: pci 6771:00:02.0: enabling Extended Tags Mar 10 
00:48:14.021685 kernel: pci 6771:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6771:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 10 00:48:14.031224 kernel: pci_bus 6771:00: busn_res: [bus 00-ff] end is updated to 00 Mar 10 00:48:14.031424 kernel: pci 6771:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 10 00:48:14.070103 kernel: mlx5_core 6771:00:02.0: enabling device (0000 -> 0002) Mar 10 00:48:14.076656 kernel: mlx5_core 6771:00:02.0: firmware version: 16.30.5026 Mar 10 00:48:14.136144 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 10 00:48:14.155463 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 10 00:48:14.167875 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 10 00:48:14.183896 kernel: BTRFS info (device sda6): last unmount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:14.183533 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 10 00:48:14.217919 ignition[1061]: INFO : Ignition 2.19.0 Mar 10 00:48:14.221926 ignition[1061]: INFO : Stage: mount Mar 10 00:48:14.221926 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:14.221926 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:14.221926 ignition[1061]: INFO : mount: mount passed Mar 10 00:48:14.221926 ignition[1061]: INFO : Ignition finished successfully Mar 10 00:48:14.224528 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 10 00:48:14.229362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 10 00:48:14.251894 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 10 00:48:14.270626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 10 00:48:14.298646 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077) Mar 10 00:48:14.309385 kernel: BTRFS info (device sda6): first mount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:14.309427 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:14.312866 kernel: BTRFS info (device sda6): using free space tree Mar 10 00:48:14.319643 kernel: BTRFS info (device sda6): auto enabling async discard Mar 10 00:48:14.321507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 10 00:48:14.345324 ignition[1096]: INFO : Ignition 2.19.0 Mar 10 00:48:14.345324 ignition[1096]: INFO : Stage: files Mar 10 00:48:14.351517 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:14.351517 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:14.351517 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping Mar 10 00:48:14.371000 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 10 00:48:14.371000 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 10 00:48:14.371000 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 10 00:48:14.400493 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: VF registering: eth1 Mar 10 00:48:14.400652 kernel: mlx5_core 6771:00:02.0 eth1: joined to eth0 Mar 10 00:48:14.400786 kernel: mlx5_core 6771:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 10 00:48:14.387800 unknown[1096]: wrote ssh authorized keys file for user: core Mar 10 00:48:14.408981 kernel: mlx5_core 6771:00:02.0 enP26481s1: renamed from eth1 Mar 10 00:48:14.409163 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 10 00:48:14.409163 ignition[1096]: INFO : files: ensureUsers: op(2): 
[finished] adding ssh keys to user "core" Mar 10 00:48:14.409163 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 10 00:48:14.409163 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 10 00:48:14.421000 systemd-networkd[896]: eth1: Interface name change detected, renamed to enP26481s1. Mar 10 00:48:14.460734 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 10 00:48:14.521651 kernel: mlx5_core 6771:00:02.0 enP26481s1: Link up Mar 10 00:48:14.521606 systemd-networkd[896]: enP26481s1: Link UP Mar 10 00:48:14.565205 systemd-networkd[896]: enP26481s1: Gained carrier Mar 10 00:48:14.569561 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: Data path switched to VF: enP26481s1 Mar 10 00:48:14.639800 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Mar 10 00:48:14.930747 systemd-networkd[896]: eth0: Gained IPv6LL Mar 10 00:48:15.120464 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 10 00:48:15.573581 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:15.573581 ignition[1096]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: 
op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: files passed Mar 10 00:48:15.592712 ignition[1096]: INFO : Ignition finished successfully Mar 10 00:48:15.590061 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 10 00:48:15.618945 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 10 00:48:15.632826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 10 00:48:15.642472 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 10 00:48:15.702577 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:48:15.702577 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:48:15.642559 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 10 00:48:15.729838 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:48:15.668661 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 00:48:15.677325 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 10 00:48:15.702878 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 10 00:48:15.738969 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 10 00:48:15.739129 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 10 00:48:15.749098 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 10 00:48:15.758273 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 10 00:48:15.768550 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 10 00:48:15.787901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 10 00:48:15.813739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 00:48:15.833903 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 10 00:48:15.849045 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 10 00:48:15.854267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:48:15.864150 systemd[1]: Stopped target timers.target - Timer Units. Mar 10 00:48:15.873201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 10 00:48:15.873324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 00:48:15.886140 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 10 00:48:15.890756 systemd[1]: Stopped target basic.target - Basic System. 
Mar 10 00:48:15.899518 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 10 00:48:15.908518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 00:48:15.917195 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 10 00:48:15.926324 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 10 00:48:15.936008 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 00:48:15.945910 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 10 00:48:15.954701 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 10 00:48:15.964353 systemd[1]: Stopped target swap.target - Swaps. Mar 10 00:48:15.972111 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 10 00:48:15.972229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 10 00:48:15.983940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 10 00:48:15.988881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 00:48:15.998591 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 10 00:48:15.998663 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 00:48:16.008379 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 10 00:48:16.008493 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 10 00:48:16.023060 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 10 00:48:16.023193 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 00:48:16.032731 systemd[1]: ignition-files.service: Deactivated successfully. Mar 10 00:48:16.032822 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 10 00:48:16.043296 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Mar 10 00:48:16.043394 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 10 00:48:16.098805 ignition[1149]: INFO : Ignition 2.19.0 Mar 10 00:48:16.098805 ignition[1149]: INFO : Stage: umount Mar 10 00:48:16.098805 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:16.098805 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:16.098805 ignition[1149]: INFO : umount: umount passed Mar 10 00:48:16.098805 ignition[1149]: INFO : Ignition finished successfully Mar 10 00:48:16.072931 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 10 00:48:16.081475 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 10 00:48:16.086184 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 00:48:16.100852 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 10 00:48:16.108000 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 10 00:48:16.108139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:48:16.115929 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 10 00:48:16.116025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 00:48:16.131622 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 10 00:48:16.133555 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 10 00:48:16.146318 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 10 00:48:16.146612 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 10 00:48:16.160764 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 10 00:48:16.161241 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 10 00:48:16.161279 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Mar 10 00:48:16.174573 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 00:48:16.174733 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 00:48:16.183418 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 10 00:48:16.183473 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 10 00:48:16.192007 systemd[1]: Stopped target network.target - Network.
Mar 10 00:48:16.202875 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 00:48:16.202947 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 00:48:16.208072 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 00:48:16.216385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 00:48:16.220667 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 00:48:16.226156 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 00:48:16.233917 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 00:48:16.242575 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 00:48:16.242626 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 00:48:16.253530 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 00:48:16.253583 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 00:48:16.264024 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 00:48:16.264075 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 00:48:16.272295 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 00:48:16.272338 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 00:48:16.281323 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 00:48:16.291423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 00:48:16.310709 systemd-networkd[896]: eth0: DHCPv6 lease lost
Mar 10 00:48:16.315912 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 00:48:16.316109 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 00:48:16.323999 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 00:48:16.324139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 00:48:16.336905 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 00:48:16.504445 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: Data path switched from VF: enP26481s1
Mar 10 00:48:16.336954 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 00:48:16.364189 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 00:48:16.370848 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 00:48:16.370920 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 00:48:16.380432 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 00:48:16.380477 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 00:48:16.389284 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 00:48:16.389326 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 00:48:16.397837 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 00:48:16.397879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 00:48:16.407915 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 00:48:16.442003 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 00:48:16.443160 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 00:48:16.453341 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 00:48:16.453478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 00:48:16.461041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 00:48:16.461076 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 00:48:16.470485 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 00:48:16.470540 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 00:48:16.483658 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 00:48:16.483725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 00:48:16.506055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 00:48:16.506157 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 00:48:16.530878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 00:48:16.542759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 00:48:16.542828 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 00:48:16.553968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 00:48:16.554020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:16.563852 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 00:48:16.563958 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 00:48:16.572356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 00:48:16.572447 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 00:48:16.582881 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 00:48:16.582976 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 00:48:16.594245 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 00:48:16.602978 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 00:48:16.603067 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 00:48:16.625905 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 00:48:16.751562 systemd[1]: Switching root.
Mar 10 00:48:16.814663 systemd-journald[217]: Journal stopped
Mar 10 00:48:09.197665 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 10 00:48:09.197672 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 10 00:48:09.197678 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 10 00:48:09.197684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 10 00:48:09.197690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 10 00:48:09.197697 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 10 00:48:09.197703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 10 00:48:09.197710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 10 00:48:09.197716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 10 00:48:09.197724 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 10 00:48:09.197731 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 10 00:48:09.197738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 10 00:48:09.197744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 10 00:48:09.197751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 10 00:48:09.197757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 10 00:48:09.197763 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 10 00:48:09.197769 kernel: Zone ranges:
Mar 10 00:48:09.197776 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 10 00:48:09.197782 kernel: DMA32 empty
Mar 10 00:48:09.197788 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 10 00:48:09.197795 kernel: Movable zone start for each node
Mar 10 00:48:09.197805 kernel: Early memory node ranges
Mar 10 00:48:09.197812 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 10 00:48:09.197819 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 10 00:48:09.197826 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 10 00:48:09.197833 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 10 00:48:09.197841 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 10 00:48:09.197847 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 10 00:48:09.197854 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 10 00:48:09.197861 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 10 00:48:09.197867 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 10 00:48:09.197874 kernel: psci: probing for conduit method from ACPI.
Mar 10 00:48:09.197881 kernel: psci: PSCIv1.1 detected in firmware.
Mar 10 00:48:09.197888 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 10 00:48:09.197895 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 10 00:48:09.199654 kernel: psci: SMC Calling Convention v1.4
Mar 10 00:48:09.199663 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 10 00:48:09.199670 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 10 00:48:09.199683 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 10 00:48:09.199690 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 10 00:48:09.199697 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 10 00:48:09.199704 kernel: Detected PIPT I-cache on CPU0
Mar 10 00:48:09.199711 kernel: CPU features: detected: GIC system register CPU interface
Mar 10 00:48:09.199718 kernel: CPU features: detected: Hardware dirty bit management
Mar 10 00:48:09.199725 kernel: CPU features: detected: Spectre-BHB
Mar 10 00:48:09.199732 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 10 00:48:09.199738 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 10 00:48:09.199745 kernel: CPU features: detected: ARM erratum 1418040
Mar 10 00:48:09.199752 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 10 00:48:09.199761 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 10 00:48:09.199768 kernel: alternatives: applying boot alternatives
Mar 10 00:48:09.199777 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1cb2f7ec5607d2e9d8553783fe2055fabeb5d47c96b1d8f75c394b81aeedb17c
Mar 10 00:48:09.199784 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 00:48:09.199791 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 00:48:09.199798 kernel: Fallback order for Node 0: 0
Mar 10 00:48:09.199805 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 10 00:48:09.199812 kernel: Policy zone: Normal
Mar 10 00:48:09.199819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 00:48:09.199825 kernel: software IO TLB: area num 2.
Mar 10 00:48:09.199832 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 10 00:48:09.199841 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 10 00:48:09.199848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 10 00:48:09.199855 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 00:48:09.199863 kernel: rcu: RCU event tracing is enabled.
Mar 10 00:48:09.199870 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 10 00:48:09.199877 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 00:48:09.199884 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 00:48:09.199891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 00:48:09.199909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 10 00:48:09.199916 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 10 00:48:09.199923 kernel: GICv3: 960 SPIs implemented
Mar 10 00:48:09.199932 kernel: GICv3: 0 Extended SPIs implemented
Mar 10 00:48:09.199939 kernel: Root IRQ handler: gic_handle_irq
Mar 10 00:48:09.199946 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 10 00:48:09.199953 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 10 00:48:09.199959 kernel: ITS: No ITS available, not enabling LPIs
Mar 10 00:48:09.199967 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 00:48:09.199974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 10 00:48:09.199981 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 10 00:48:09.199988 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 10 00:48:09.199995 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 10 00:48:09.200002 kernel: Console: colour dummy device 80x25
Mar 10 00:48:09.200011 kernel: printk: console [tty1] enabled
Mar 10 00:48:09.200018 kernel: ACPI: Core revision 20230628
Mar 10 00:48:09.200026 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 10 00:48:09.200033 kernel: pid_max: default: 32768 minimum: 301
Mar 10 00:48:09.200040 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 00:48:09.200047 kernel: landlock: Up and running.
Mar 10 00:48:09.200054 kernel: SELinux: Initializing.
Mar 10 00:48:09.200061 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200068 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200077 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 10 00:48:09.200084 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 10 00:48:09.200092 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 10 00:48:09.200099 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 10 00:48:09.200106 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 10 00:48:09.200113 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 00:48:09.200120 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 00:48:09.200127 kernel: Remapping and enabling EFI services.
Mar 10 00:48:09.200142 kernel: smp: Bringing up secondary CPUs ...
Mar 10 00:48:09.200149 kernel: Detected PIPT I-cache on CPU1
Mar 10 00:48:09.200156 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 10 00:48:09.200164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 10 00:48:09.200173 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 10 00:48:09.200180 kernel: smp: Brought up 1 node, 2 CPUs
Mar 10 00:48:09.200188 kernel: SMP: Total of 2 processors activated.
Mar 10 00:48:09.200195 kernel: CPU features: detected: 32-bit EL0 Support
Mar 10 00:48:09.200203 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 10 00:48:09.200212 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 10 00:48:09.200220 kernel: CPU features: detected: CRC32 instructions
Mar 10 00:48:09.200227 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 10 00:48:09.200235 kernel: CPU features: detected: LSE atomic instructions
Mar 10 00:48:09.200243 kernel: CPU features: detected: Privileged Access Never
Mar 10 00:48:09.200250 kernel: CPU: All CPU(s) started at EL1
Mar 10 00:48:09.200258 kernel: alternatives: applying system-wide alternatives
Mar 10 00:48:09.200265 kernel: devtmpfs: initialized
Mar 10 00:48:09.200273 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 00:48:09.200282 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 10 00:48:09.200290 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 00:48:09.200297 kernel: SMBIOS 3.1.0 present.
Mar 10 00:48:09.200305 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 10 00:48:09.200312 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 00:48:09.200320 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 10 00:48:09.200328 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 10 00:48:09.200335 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 10 00:48:09.200343 kernel: audit: initializing netlink subsys (disabled)
Mar 10 00:48:09.200352 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 10 00:48:09.200359 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 00:48:09.200367 kernel: cpuidle: using governor menu
Mar 10 00:48:09.200374 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 10 00:48:09.200382 kernel: ASID allocator initialised with 32768 entries
Mar 10 00:48:09.200389 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 00:48:09.200396 kernel: Serial: AMBA PL011 UART driver
Mar 10 00:48:09.200404 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 10 00:48:09.200411 kernel: Modules: 0 pages in range for non-PLT usage
Mar 10 00:48:09.200421 kernel: Modules: 509008 pages in range for PLT usage
Mar 10 00:48:09.200428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200436 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 00:48:09.200443 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200451 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 10 00:48:09.200458 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 00:48:09.200473 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 10 00:48:09.200481 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 10 00:48:09.200490 kernel: ACPI: Added _OSI(Module Device)
Mar 10 00:48:09.200498 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 00:48:09.200506 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 00:48:09.200513 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 00:48:09.200521 kernel: ACPI: Interpreter enabled
Mar 10 00:48:09.200529 kernel: ACPI: Using GIC for interrupt routing
Mar 10 00:48:09.200536 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 10 00:48:09.200544 kernel: printk: console [ttyAMA0] enabled
Mar 10 00:48:09.200551 kernel: printk: bootconsole [pl11] disabled
Mar 10 00:48:09.200561 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 10 00:48:09.200568 kernel: iommu: Default domain type: Translated
Mar 10 00:48:09.200576 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 10 00:48:09.200584 kernel: efivars: Registered efivars operations
Mar 10 00:48:09.200591 kernel: vgaarb: loaded
Mar 10 00:48:09.200599 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 10 00:48:09.200606 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 00:48:09.200614 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 00:48:09.200621 kernel: pnp: PnP ACPI init
Mar 10 00:48:09.200631 kernel: pnp: PnP ACPI: found 0 devices
Mar 10 00:48:09.200639 kernel: NET: Registered PF_INET protocol family
Mar 10 00:48:09.200646 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 00:48:09.200654 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 00:48:09.200662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 00:48:09.200670 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 00:48:09.200677 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 00:48:09.200685 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 00:48:09.200693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 00:48:09.200709 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 00:48:09.200717 kernel: PCI: CLS 0 bytes, default 64
Mar 10 00:48:09.200724 kernel: kvm [1]: HYP mode not available
Mar 10 00:48:09.200732 kernel: Initialise system trusted keyrings
Mar 10 00:48:09.200740 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 00:48:09.200747 kernel: Key type asymmetric registered
Mar 10 00:48:09.200755 kernel: Asymmetric key parser 'x509' registered
Mar 10 00:48:09.200762 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 10 00:48:09.200771 kernel: io scheduler mq-deadline registered
Mar 10 00:48:09.200779 kernel: io scheduler kyber registered
Mar 10 00:48:09.200786 kernel: io scheduler bfq registered
Mar 10 00:48:09.200794 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 00:48:09.200802 kernel: thunder_xcv, ver 1.0
Mar 10 00:48:09.200809 kernel: thunder_bgx, ver 1.0
Mar 10 00:48:09.200817 kernel: nicpf, ver 1.0
Mar 10 00:48:09.200824 kernel: nicvf, ver 1.0
Mar 10 00:48:09.201001 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 10 00:48:09.201082 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-10T00:48:08 UTC (1773103688)
Mar 10 00:48:09.201093 kernel: efifb: probing for efifb
Mar 10 00:48:09.201101 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 10 00:48:09.201109 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 10 00:48:09.201116 kernel: efifb: scrolling: redraw
Mar 10 00:48:09.201123 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 10 00:48:09.201131 kernel: Console: switching to colour frame buffer device 128x48
Mar 10 00:48:09.201138 kernel: fb0: EFI VGA frame buffer device
Mar 10 00:48:09.201148 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 10 00:48:09.201155 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 10 00:48:09.201163 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 10 00:48:09.201171 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 10 00:48:09.201178 kernel: watchdog: Hard watchdog permanently disabled
Mar 10 00:48:09.201186 kernel: NET: Registered PF_INET6 protocol family
Mar 10 00:48:09.201193 kernel: Segment Routing with IPv6
Mar 10 00:48:09.201201 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 00:48:09.201208 kernel: NET: Registered PF_PACKET protocol family
Mar 10 00:48:09.201218 kernel: Key type dns_resolver registered
Mar 10 00:48:09.201227 kernel: registered taskstats version 1
Mar 10 00:48:09.201234 kernel: Loading compiled-in X.509 certificates
Mar 10 00:48:09.201242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 7b495b3b5b313bb2164465cfb0bfaed4a10d01c4'
Mar 10 00:48:09.201249 kernel: Key type .fscrypt registered
Mar 10 00:48:09.201256 kernel: Key type fscrypt-provisioning registered
Mar 10 00:48:09.201264 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 00:48:09.201271 kernel: ima: Allocated hash algorithm: sha1
Mar 10 00:48:09.201278 kernel: ima: No architecture policies found
Mar 10 00:48:09.201288 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 10 00:48:09.201296 kernel: clk: Disabling unused clocks
Mar 10 00:48:09.201303 kernel: Freeing unused kernel memory: 39424K
Mar 10 00:48:09.201310 kernel: Run /init as init process
Mar 10 00:48:09.201318 kernel: with arguments:
Mar 10 00:48:09.201325 kernel: /init
Mar 10 00:48:09.201332 kernel: with environment:
Mar 10 00:48:09.201340 kernel: HOME=/
Mar 10 00:48:09.201347 kernel: TERM=linux
Mar 10 00:48:09.201357 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 00:48:09.201368 systemd[1]: Detected virtualization microsoft.
Mar 10 00:48:09.201376 systemd[1]: Detected architecture arm64.
Mar 10 00:48:09.201387 systemd[1]: Running in initrd.
Mar 10 00:48:09.201395 systemd[1]: No hostname configured, using default hostname.
Mar 10 00:48:09.201403 systemd[1]: Hostname set to .
Mar 10 00:48:09.201411 systemd[1]: Initializing machine ID from random generator.
Mar 10 00:48:09.201420 systemd[1]: Queued start job for default target initrd.target.
Mar 10 00:48:09.201428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 00:48:09.201437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 00:48:09.201445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 00:48:09.201453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 00:48:09.201461 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 00:48:09.201470 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 00:48:09.201479 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 00:48:09.201489 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 00:48:09.201498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 00:48:09.201506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 00:48:09.201514 systemd[1]: Reached target paths.target - Path Units.
Mar 10 00:48:09.201522 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 00:48:09.201530 systemd[1]: Reached target swap.target - Swaps.
Mar 10 00:48:09.201538 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 00:48:09.201546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 00:48:09.201556 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 00:48:09.201564 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 00:48:09.201572 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 00:48:09.201580 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 00:48:09.201588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 00:48:09.201597 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 00:48:09.201606 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 00:48:09.201614 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 00:48:09.201623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 00:48:09.201632 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 00:48:09.201640 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 00:48:09.201648 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 00:48:09.201656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 00:48:09.201684 systemd-journald[217]: Collecting audit messages is disabled.
Mar 10 00:48:09.201706 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 00:48:09.201715 systemd-journald[217]: Journal started
Mar 10 00:48:09.201733 systemd-journald[217]: Runtime Journal (/run/log/journal/0cd6bb6788a74acf96e9759219920968) is 8.0M, max 78.5M, 70.5M free.
Mar 10 00:48:09.210742 systemd-modules-load[218]: Inserted module 'overlay'
Mar 10 00:48:09.228629 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 00:48:09.237913 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 00:48:09.238923 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 00:48:09.251514 kernel: Bridge firewalling registered
Mar 10 00:48:09.246285 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 10 00:48:09.247202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 00:48:09.257246 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 00:48:09.264799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 00:48:09.274514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:09.294252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 00:48:09.305430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 00:48:09.318252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 00:48:09.342109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 00:48:09.348253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 00:48:09.359165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 00:48:09.363973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 00:48:09.383069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 00:48:09.402121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 00:48:09.410067 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 00:48:09.430839 dracut-cmdline[252]: dracut-dracut-053
Mar 10 00:48:09.439381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 00:48:09.457355 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1cb2f7ec5607d2e9d8553783fe2055fabeb5d47c96b1d8f75c394b81aeedb17c
Mar 10 00:48:09.453563 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 00:48:09.492459 systemd-resolved[253]: Positive Trust Anchors:
Mar 10 00:48:09.492469 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 00:48:09.492501 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 00:48:09.495742 systemd-resolved[253]: Defaulting to hostname 'linux'.
Mar 10 00:48:09.496669 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 00:48:09.506231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 00:48:09.602918 kernel: SCSI subsystem initialized
Mar 10 00:48:09.612913 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 00:48:09.620930 kernel: iscsi: registered transport (tcp)
Mar 10 00:48:09.636974 kernel: iscsi: registered transport (qla4xxx)
Mar 10 00:48:09.637035 kernel: QLogic iSCSI HBA Driver
Mar 10 00:48:09.671500 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 00:48:09.686052 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 00:48:09.715949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 00:48:09.716016 kernel: device-mapper: uevent: version 1.0.3 Mar 10 00:48:09.721032 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 10 00:48:09.772920 kernel: raid6: neonx8 gen() 15793 MB/s Mar 10 00:48:09.786905 kernel: raid6: neonx4 gen() 15691 MB/s Mar 10 00:48:09.805904 kernel: raid6: neonx2 gen() 13322 MB/s Mar 10 00:48:09.825917 kernel: raid6: neonx1 gen() 10489 MB/s Mar 10 00:48:09.844902 kernel: raid6: int64x8 gen() 6977 MB/s Mar 10 00:48:09.863903 kernel: raid6: int64x4 gen() 7369 MB/s Mar 10 00:48:09.883903 kernel: raid6: int64x2 gen() 6142 MB/s Mar 10 00:48:09.905613 kernel: raid6: int64x1 gen() 5069 MB/s Mar 10 00:48:09.905624 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s Mar 10 00:48:09.928020 kernel: raid6: .... xor() 11992 MB/s, rmw enabled Mar 10 00:48:09.928030 kernel: raid6: using neon recovery algorithm Mar 10 00:48:09.938409 kernel: xor: measuring software checksum speed Mar 10 00:48:09.938427 kernel: 8regs : 19836 MB/sec Mar 10 00:48:09.942026 kernel: 32regs : 19585 MB/sec Mar 10 00:48:09.944824 kernel: arm64_neon : 27061 MB/sec Mar 10 00:48:09.948076 kernel: xor: using function: arm64_neon (27061 MB/sec) Mar 10 00:48:09.998918 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 10 00:48:10.008409 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 10 00:48:10.024018 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 00:48:10.043476 systemd-udevd[438]: Using default interface naming scheme 'v255'. Mar 10 00:48:10.047884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 00:48:10.069076 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 10 00:48:10.085693 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Mar 10 00:48:10.115355 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 10 00:48:10.127384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 00:48:10.165548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:48:10.181258 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 10 00:48:10.210933 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 10 00:48:10.222281 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 00:48:10.236025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:48:10.244780 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 00:48:10.274201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 10 00:48:10.295173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 00:48:10.295361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 00:48:10.320389 kernel: hv_vmbus: Vmbus version:5.3 Mar 10 00:48:10.320412 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 10 00:48:10.315561 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 00:48:10.331644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 00:48:10.336067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 00:48:10.388715 kernel: hv_vmbus: registering driver hv_netvsc Mar 10 00:48:10.388737 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 10 00:48:10.388747 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 10 00:48:10.388757 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 10 00:48:10.388767 kernel: hv_vmbus: registering driver hv_storvsc Mar 10 00:48:10.388777 kernel: PTP clock support registered Mar 10 00:48:10.388786 kernel: scsi host0: storvsc_host_t Mar 10 00:48:10.354111 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:48:10.407007 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 10 00:48:10.407052 kernel: hv_vmbus: registering driver hid_hyperv Mar 10 00:48:10.407062 kernel: scsi host1: storvsc_host_t Mar 10 00:48:10.383273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:48:10.438371 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 10 00:48:10.438401 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 10 00:48:10.438563 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 10 00:48:10.391703 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 10 00:48:10.428023 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 00:48:10.428186 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 00:48:10.448746 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:48:10.470189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:48:10.505398 kernel: hv_utils: Registering HyperV Utility Driver Mar 10 00:48:10.505456 kernel: hv_vmbus: registering driver hv_utils Mar 10 00:48:10.499765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 10 00:48:10.514967 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 10 00:48:10.515157 kernel: hv_utils: Heartbeat IC version 3.0 Mar 10 00:48:10.521113 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 10 00:48:10.521164 kernel: hv_utils: Shutdown IC version 3.2 Mar 10 00:48:10.931123 kernel: hv_utils: TimeSync IC version 4.0 Mar 10 00:48:10.931210 systemd-resolved[253]: Clock change detected. Flushing caches. Mar 10 00:48:10.938805 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 10 00:48:10.939086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 00:48:10.974706 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 10 00:48:10.974932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 10 00:48:10.975036 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 10 00:48:10.982708 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 10 00:48:10.983919 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 00:48:11.004364 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 10 00:48:11.004597 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 10 00:48:11.012942 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 10 00:48:11.012991 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 10 00:48:11.031719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 10 00:48:11.155196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 10 00:48:11.173623 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (489) Mar 10 00:48:11.173658 kernel: BTRFS: device fsid a63952f9-44dc-43e8-8cbd-3c9d9ba5a4f4 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (490) Mar 10 00:48:11.193434 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 10 00:48:11.208707 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 10 00:48:11.214201 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 10 00:48:11.232379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 10 00:48:11.247816 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 00:48:11.272652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 10 00:48:12.287658 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 10 00:48:12.287994 disk-uuid[596]: The operation has completed successfully. Mar 10 00:48:12.355030 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 10 00:48:12.356774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 00:48:12.386751 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 00:48:12.397477 sh[709]: Success Mar 10 00:48:12.414656 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 10 00:48:12.497436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 10 00:48:12.519777 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 00:48:12.527780 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 10 00:48:12.559824 kernel: BTRFS info (device dm-0): first mount of filesystem a63952f9-44dc-43e8-8cbd-3c9d9ba5a4f4 Mar 10 00:48:12.559890 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:12.565351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 10 00:48:12.569693 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 10 00:48:12.573279 kernel: BTRFS info (device dm-0): using free space tree Mar 10 00:48:12.655389 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 00:48:12.659715 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 00:48:12.675813 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 10 00:48:12.680783 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 00:48:12.711207 kernel: BTRFS info (device sda6): first mount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:12.711240 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:12.714834 kernel: BTRFS info (device sda6): using free space tree Mar 10 00:48:12.729655 kernel: BTRFS info (device sda6): auto enabling async discard Mar 10 00:48:12.739078 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 10 00:48:12.748378 kernel: BTRFS info (device sda6): last unmount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:12.757555 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 00:48:12.767162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 10 00:48:12.813550 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 00:48:12.828781 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 10 00:48:12.861714 systemd-networkd[896]: lo: Link UP Mar 10 00:48:12.861721 systemd-networkd[896]: lo: Gained carrier Mar 10 00:48:12.862456 systemd-networkd[896]: Enumeration completed Mar 10 00:48:12.863110 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 00:48:12.865595 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:12.865598 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 00:48:12.866393 systemd-networkd[896]: eth0: Link UP Mar 10 00:48:12.866519 systemd-networkd[896]: eth0: Gained carrier Mar 10 00:48:12.866527 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:12.874615 systemd[1]: Reached target network.target - Network. Mar 10 00:48:12.920444 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 10 00:48:13.035088 ignition[832]: Ignition 2.19.0 Mar 10 00:48:13.035098 ignition[832]: Stage: fetch-offline Mar 10 00:48:13.038945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 00:48:13.035139 ignition[832]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.035147 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.035236 ignition[832]: parsed url from cmdline: "" Mar 10 00:48:13.035240 ignition[832]: no config URL provided Mar 10 00:48:13.035245 ignition[832]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 00:48:13.035251 ignition[832]: no config at "/usr/lib/ignition/user.ign" Mar 10 00:48:13.035256 ignition[832]: failed to fetch config: resource requires networking Mar 10 00:48:13.070788 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 10 00:48:13.035431 ignition[832]: Ignition finished successfully Mar 10 00:48:13.087458 ignition[905]: Ignition 2.19.0 Mar 10 00:48:13.087465 ignition[905]: Stage: fetch Mar 10 00:48:13.087693 ignition[905]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.087706 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.087812 ignition[905]: parsed url from cmdline: "" Mar 10 00:48:13.087815 ignition[905]: no config URL provided Mar 10 00:48:13.087820 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 00:48:13.087827 ignition[905]: no config at "/usr/lib/ignition/user.ign" Mar 10 00:48:13.087852 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 10 00:48:13.179911 ignition[905]: GET result: OK Mar 10 00:48:13.179992 ignition[905]: config has been read from IMDS userdata Mar 10 00:48:13.180035 ignition[905]: parsing config with SHA512: 0b17dfc4a97a06d00bfe0fa707ac8d03e5593c85d28db381031152fc91daba4e4ef46997d925f675bffcd117b851ed36e1a3192904d262d0eb6bfd1a7ccf4033 Mar 10 00:48:13.183724 unknown[905]: fetched base config from "system" Mar 10 00:48:13.184086 ignition[905]: fetch: fetch complete Mar 10 00:48:13.183731 unknown[905]: fetched base config from "system" Mar 10 00:48:13.184091 ignition[905]: fetch: fetch passed Mar 10 00:48:13.183737 unknown[905]: fetched user config from "azure" Mar 10 00:48:13.184131 ignition[905]: Ignition finished successfully Mar 10 00:48:13.187710 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 10 00:48:13.209778 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 10 00:48:13.226233 ignition[912]: Ignition 2.19.0 Mar 10 00:48:13.226241 ignition[912]: Stage: kargs Mar 10 00:48:13.226408 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.234821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Mar 10 00:48:13.226417 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.227295 ignition[912]: kargs: kargs passed Mar 10 00:48:13.227337 ignition[912]: Ignition finished successfully Mar 10 00:48:13.255901 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 10 00:48:13.268309 ignition[918]: Ignition 2.19.0 Mar 10 00:48:13.268318 ignition[918]: Stage: disks Mar 10 00:48:13.268497 ignition[918]: no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:13.274334 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 10 00:48:13.268506 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:13.279299 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 10 00:48:13.271910 ignition[918]: disks: disks passed Mar 10 00:48:13.287488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 10 00:48:13.271961 ignition[918]: Ignition finished successfully Mar 10 00:48:13.297327 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 00:48:13.306554 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 00:48:13.315875 systemd[1]: Reached target basic.target - Basic System. Mar 10 00:48:13.339886 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 10 00:48:13.377042 systemd-fsck[927]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 10 00:48:13.383408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 10 00:48:13.395849 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 10 00:48:13.456651 kernel: EXT4-fs (sda9): mounted filesystem 844b1bba-9a09-4570-b8c9-fc554c45e730 r/w with ordered data mode. Quota mode: none. Mar 10 00:48:13.457300 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 10 00:48:13.461930 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Mar 10 00:48:13.485712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 00:48:13.496753 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 10 00:48:13.519880 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Mar 10 00:48:13.519931 kernel: BTRFS info (device sda6): first mount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:13.519893 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 10 00:48:13.538232 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:13.538255 kernel: BTRFS info (device sda6): using free space tree Mar 10 00:48:13.532861 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 10 00:48:13.532896 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 00:48:13.544540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 10 00:48:13.574659 kernel: BTRFS info (device sda6): auto enabling async discard Mar 10 00:48:13.574807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 10 00:48:13.581065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 10 00:48:13.708295 coreos-metadata[940]: Mar 10 00:48:13.708 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 10 00:48:13.717685 coreos-metadata[940]: Mar 10 00:48:13.717 INFO Fetch successful Mar 10 00:48:13.717685 coreos-metadata[940]: Mar 10 00:48:13.717 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 10 00:48:13.734261 coreos-metadata[940]: Mar 10 00:48:13.734 INFO Fetch successful Mar 10 00:48:13.740408 coreos-metadata[940]: Mar 10 00:48:13.740 INFO wrote hostname ci-4081.3.6-n-9b959526b1 to /sysroot/etc/hostname Mar 10 00:48:13.748510 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 10 00:48:13.777161 initrd-setup-root[967]: cut: /sysroot/etc/passwd: No such file or directory Mar 10 00:48:13.789966 initrd-setup-root[974]: cut: /sysroot/etc/group: No such file or directory Mar 10 00:48:13.798428 initrd-setup-root[981]: cut: /sysroot/etc/shadow: No such file or directory Mar 10 00:48:13.806453 initrd-setup-root[988]: cut: /sysroot/etc/gshadow: No such file or directory Mar 10 00:48:13.960946 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: VF slot 1 added Mar 10 00:48:13.972387 kernel: hv_vmbus: registering driver hv_pci Mar 10 00:48:13.972449 kernel: hv_pci 47babfe7-6771-409d-a406-1ba9adcbb034: PCI VMBus probing: Using version 0x10004 Mar 10 00:48:13.983395 kernel: hv_pci 47babfe7-6771-409d-a406-1ba9adcbb034: PCI host bridge to bus 6771:00 Mar 10 00:48:13.983624 kernel: pci_bus 6771:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 10 00:48:13.983743 kernel: pci_bus 6771:00: No busn resource found for root bus, will use [bus 00-ff] Mar 10 00:48:13.993663 kernel: pci 6771:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 10 00:48:14.000679 kernel: pci 6771:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 10 00:48:14.005676 kernel: pci 6771:00:02.0: enabling Extended Tags Mar 10 
00:48:14.021685 kernel: pci 6771:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6771:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 10 00:48:14.031224 kernel: pci_bus 6771:00: busn_res: [bus 00-ff] end is updated to 00 Mar 10 00:48:14.031424 kernel: pci 6771:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 10 00:48:14.070103 kernel: mlx5_core 6771:00:02.0: enabling device (0000 -> 0002) Mar 10 00:48:14.076656 kernel: mlx5_core 6771:00:02.0: firmware version: 16.30.5026 Mar 10 00:48:14.136144 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 10 00:48:14.155463 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 10 00:48:14.167875 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 10 00:48:14.183896 kernel: BTRFS info (device sda6): last unmount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:14.183533 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 10 00:48:14.217919 ignition[1061]: INFO : Ignition 2.19.0 Mar 10 00:48:14.221926 ignition[1061]: INFO : Stage: mount Mar 10 00:48:14.221926 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:14.221926 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:14.221926 ignition[1061]: INFO : mount: mount passed Mar 10 00:48:14.221926 ignition[1061]: INFO : Ignition finished successfully Mar 10 00:48:14.224528 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 10 00:48:14.229362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 10 00:48:14.251894 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 10 00:48:14.270626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 10 00:48:14.298646 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077) Mar 10 00:48:14.309385 kernel: BTRFS info (device sda6): first mount of filesystem da6e29ba-b4b7-4235-b73d-7fb31890cd2f Mar 10 00:48:14.309427 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 10 00:48:14.312866 kernel: BTRFS info (device sda6): using free space tree Mar 10 00:48:14.319643 kernel: BTRFS info (device sda6): auto enabling async discard Mar 10 00:48:14.321507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 10 00:48:14.345324 ignition[1096]: INFO : Ignition 2.19.0 Mar 10 00:48:14.345324 ignition[1096]: INFO : Stage: files Mar 10 00:48:14.351517 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:14.351517 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:14.351517 ignition[1096]: DEBUG : files: compiled without relabeling support, skipping Mar 10 00:48:14.371000 ignition[1096]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 10 00:48:14.371000 ignition[1096]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 10 00:48:14.371000 ignition[1096]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 10 00:48:14.400493 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: VF registering: eth1 Mar 10 00:48:14.400652 kernel: mlx5_core 6771:00:02.0 eth1: joined to eth0 Mar 10 00:48:14.400786 kernel: mlx5_core 6771:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 10 00:48:14.387800 unknown[1096]: wrote ssh authorized keys file for user: core Mar 10 00:48:14.408981 kernel: mlx5_core 6771:00:02.0 enP26481s1: renamed from eth1 Mar 10 00:48:14.409163 ignition[1096]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 10 00:48:14.409163 ignition[1096]: INFO : files: ensureUsers: op(2): 
[finished] adding ssh keys to user "core" Mar 10 00:48:14.409163 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 10 00:48:14.409163 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 10 00:48:14.421000 systemd-networkd[896]: eth1: Interface name change detected, renamed to enP26481s1. Mar 10 00:48:14.460734 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 10 00:48:14.521651 kernel: mlx5_core 6771:00:02.0 enP26481s1: Link up Mar 10 00:48:14.521606 systemd-networkd[896]: enP26481s1: Link UP Mar 10 00:48:14.565205 systemd-networkd[896]: enP26481s1: Gained carrier Mar 10 00:48:14.569561 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: Data path switched to VF: enP26481s1 Mar 10 00:48:14.639800 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:14.648314 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Mar 10 00:48:14.930747 systemd-networkd[896]: eth0: Gained IPv6LL Mar 10 00:48:15.120464 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 10 00:48:15.573581 ignition[1096]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 10 00:48:15.573581 ignition[1096]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: 
op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 10 00:48:15.592712 ignition[1096]: INFO : files: files passed Mar 10 00:48:15.592712 ignition[1096]: INFO : Ignition finished successfully Mar 10 00:48:15.590061 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 10 00:48:15.618945 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 10 00:48:15.632826 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 10 00:48:15.642472 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 10 00:48:15.702577 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:48:15.702577 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:48:15.642559 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 10 00:48:15.729838 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 00:48:15.668661 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 00:48:15.677325 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 10 00:48:15.702878 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 10 00:48:15.738969 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 10 00:48:15.739129 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 10 00:48:15.749098 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 10 00:48:15.758273 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 10 00:48:15.768550 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 10 00:48:15.787901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 10 00:48:15.813739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 00:48:15.833903 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 10 00:48:15.849045 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 10 00:48:15.854267 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:48:15.864150 systemd[1]: Stopped target timers.target - Timer Units. Mar 10 00:48:15.873201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 10 00:48:15.873324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 00:48:15.886140 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 10 00:48:15.890756 systemd[1]: Stopped target basic.target - Basic System. 
Mar 10 00:48:15.899518 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 10 00:48:15.908518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 00:48:15.917195 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 10 00:48:15.926324 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 10 00:48:15.936008 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 00:48:15.945910 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 10 00:48:15.954701 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 10 00:48:15.964353 systemd[1]: Stopped target swap.target - Swaps. Mar 10 00:48:15.972111 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 10 00:48:15.972229 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 10 00:48:15.983940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 10 00:48:15.988881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 00:48:15.998591 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 10 00:48:15.998663 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 00:48:16.008379 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 10 00:48:16.008493 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 10 00:48:16.023060 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 10 00:48:16.023193 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 00:48:16.032731 systemd[1]: ignition-files.service: Deactivated successfully. Mar 10 00:48:16.032822 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 10 00:48:16.043296 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Mar 10 00:48:16.043394 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 10 00:48:16.098805 ignition[1149]: INFO : Ignition 2.19.0 Mar 10 00:48:16.098805 ignition[1149]: INFO : Stage: umount Mar 10 00:48:16.098805 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 00:48:16.098805 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 10 00:48:16.098805 ignition[1149]: INFO : umount: umount passed Mar 10 00:48:16.098805 ignition[1149]: INFO : Ignition finished successfully Mar 10 00:48:16.072931 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 10 00:48:16.081475 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 10 00:48:16.086184 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 00:48:16.100852 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 10 00:48:16.108000 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 10 00:48:16.108139 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:48:16.115929 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 10 00:48:16.116025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 00:48:16.131622 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 10 00:48:16.133555 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 10 00:48:16.146318 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 10 00:48:16.146612 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 10 00:48:16.160764 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 10 00:48:16.161241 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 10 00:48:16.161279 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Mar 10 00:48:16.174573 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 10 00:48:16.174733 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 10 00:48:16.183418 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 10 00:48:16.183473 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 10 00:48:16.192007 systemd[1]: Stopped target network.target - Network. Mar 10 00:48:16.202875 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 10 00:48:16.202947 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 00:48:16.208072 systemd[1]: Stopped target paths.target - Path Units. Mar 10 00:48:16.216385 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 10 00:48:16.220667 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 00:48:16.226156 systemd[1]: Stopped target slices.target - Slice Units. Mar 10 00:48:16.233917 systemd[1]: Stopped target sockets.target - Socket Units. Mar 10 00:48:16.242575 systemd[1]: iscsid.socket: Deactivated successfully. Mar 10 00:48:16.242626 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 00:48:16.253530 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 10 00:48:16.253583 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 00:48:16.264024 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 10 00:48:16.264075 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 10 00:48:16.272295 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 10 00:48:16.272338 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 10 00:48:16.281323 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 10 00:48:16.291423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Mar 10 00:48:16.310709 systemd-networkd[896]: eth0: DHCPv6 lease lost Mar 10 00:48:16.315912 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 10 00:48:16.316109 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 10 00:48:16.323999 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 10 00:48:16.324139 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 10 00:48:16.336905 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 10 00:48:16.504445 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: Data path switched from VF: enP26481s1 Mar 10 00:48:16.336954 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 10 00:48:16.364189 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 10 00:48:16.370848 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 10 00:48:16.370920 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 00:48:16.380432 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 00:48:16.380477 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 00:48:16.389284 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 10 00:48:16.389326 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 10 00:48:16.397837 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 10 00:48:16.397879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 00:48:16.407915 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 00:48:16.442003 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 10 00:48:16.443160 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 10 00:48:16.453341 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 10 00:48:16.453478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 10 00:48:16.461041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 10 00:48:16.461076 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 00:48:16.470485 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 10 00:48:16.470540 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 10 00:48:16.483658 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 10 00:48:16.483725 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 10 00:48:16.506055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 00:48:16.506157 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 00:48:16.530878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 10 00:48:16.542759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 10 00:48:16.542828 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 00:48:16.553968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 00:48:16.554020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 00:48:16.563852 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 10 00:48:16.563958 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 10 00:48:16.572356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 10 00:48:16.572447 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 10 00:48:16.582881 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 10 00:48:16.582976 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Mar 10 00:48:16.594245 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 10 00:48:16.602978 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 10 00:48:16.603067 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 10 00:48:16.625905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 10 00:48:16.751562 systemd[1]: Switching root. Mar 10 00:48:16.814663 systemd-journald[217]: Journal stopped Mar 10 00:48:18.758517 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 10 00:48:18.758542 kernel: SELinux: policy capability network_peer_controls=1 Mar 10 00:48:18.758552 kernel: SELinux: policy capability open_perms=1 Mar 10 00:48:18.758562 kernel: SELinux: policy capability extended_socket_class=1 Mar 10 00:48:18.758570 kernel: SELinux: policy capability always_check_network=0 Mar 10 00:48:18.758578 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 10 00:48:18.758586 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 10 00:48:18.758594 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 10 00:48:18.758602 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 10 00:48:18.758611 systemd[1]: Successfully loaded SELinux policy in 73.519ms. Mar 10 00:48:18.758621 kernel: audit: type=1403 audit(1773103697.150:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 10 00:48:18.758758 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.291ms. Mar 10 00:48:18.758782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 10 00:48:18.758792 systemd[1]: Detected virtualization microsoft. Mar 10 00:48:18.758802 systemd[1]: Detected architecture arm64. 
Mar 10 00:48:18.758817 systemd[1]: Detected first boot. Mar 10 00:48:18.758827 systemd[1]: Hostname set to . Mar 10 00:48:18.758836 systemd[1]: Initializing machine ID from random generator. Mar 10 00:48:18.758845 zram_generator::config[1190]: No configuration found. Mar 10 00:48:18.758855 systemd[1]: Populated /etc with preset unit settings. Mar 10 00:48:18.758864 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 10 00:48:18.758874 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 10 00:48:18.758884 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 10 00:48:18.758893 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 10 00:48:18.758903 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 10 00:48:18.758912 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 10 00:48:18.758921 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 10 00:48:18.758930 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 10 00:48:18.758942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 10 00:48:18.758951 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 10 00:48:18.758961 systemd[1]: Created slice user.slice - User and Session Slice. Mar 10 00:48:18.758970 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 00:48:18.758979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 00:48:18.758988 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 10 00:48:18.758998 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Mar 10 00:48:18.759007 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 10 00:48:18.759018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 00:48:18.759029 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 10 00:48:18.759039 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 00:48:18.759048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 10 00:48:18.759060 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 10 00:48:18.759069 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 10 00:48:18.759079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 10 00:48:18.759089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 00:48:18.759100 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 00:48:18.759109 systemd[1]: Reached target slices.target - Slice Units. Mar 10 00:48:18.759119 systemd[1]: Reached target swap.target - Swaps. Mar 10 00:48:18.759128 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 10 00:48:18.759138 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 10 00:48:18.759147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 00:48:18.759157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 00:48:18.759169 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 00:48:18.759179 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 10 00:48:18.759188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 10 00:48:18.759198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Mar 10 00:48:18.759207 systemd[1]: Mounting media.mount - External Media Directory... Mar 10 00:48:18.759217 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 10 00:48:18.759229 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 10 00:48:18.759239 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 10 00:48:18.759249 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 10 00:48:18.759259 systemd[1]: Reached target machines.target - Containers. Mar 10 00:48:18.759269 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 10 00:48:18.759279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 00:48:18.759288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 00:48:18.759298 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 10 00:48:18.759309 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 00:48:18.759319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 00:48:18.759329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 00:48:18.759339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 10 00:48:18.759348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 00:48:18.759358 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 10 00:48:18.759368 kernel: ACPI: bus type drm_connector registered Mar 10 00:48:18.759377 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Mar 10 00:48:18.759387 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 10 00:48:18.759398 kernel: fuse: init (API version 7.39) Mar 10 00:48:18.759408 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 10 00:48:18.759417 systemd[1]: Stopped systemd-fsck-usr.service. Mar 10 00:48:18.759427 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 00:48:18.759437 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 00:48:18.759447 kernel: loop: module loaded Mar 10 00:48:18.759479 systemd-journald[1293]: Collecting audit messages is disabled. Mar 10 00:48:18.759502 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 00:48:18.759513 systemd-journald[1293]: Journal started Mar 10 00:48:18.759533 systemd-journald[1293]: Runtime Journal (/run/log/journal/a6053212887847e88b4c14c54902079a) is 8.0M, max 78.5M, 70.5M free. Mar 10 00:48:18.025810 systemd[1]: Queued start job for default target multi-user.target. Mar 10 00:48:18.066680 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 10 00:48:18.067040 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 10 00:48:18.067367 systemd[1]: systemd-journald.service: Consumed 2.520s CPU time. Mar 10 00:48:18.777861 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 10 00:48:18.786356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 00:48:18.797848 systemd[1]: verity-setup.service: Deactivated successfully. Mar 10 00:48:18.797910 systemd[1]: Stopped verity-setup.service. Mar 10 00:48:18.817736 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 00:48:18.818625 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 10 00:48:18.823757 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Mar 10 00:48:18.828866 systemd[1]: Mounted media.mount - External Media Directory. Mar 10 00:48:18.833435 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 10 00:48:18.838502 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 10 00:48:18.843762 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 10 00:48:18.848389 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 10 00:48:18.854288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 00:48:18.860483 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 10 00:48:18.860867 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 10 00:48:18.866732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 00:48:18.866977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 00:48:18.872859 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 00:48:18.873077 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 00:48:18.878450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 00:48:18.878815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 00:48:18.884998 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 10 00:48:18.885219 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 10 00:48:18.890493 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 00:48:18.890850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 00:48:18.896241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 00:48:18.902008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 00:48:18.908372 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Mar 10 00:48:18.914805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 00:48:18.932182 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 10 00:48:18.943716 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 10 00:48:18.950206 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 10 00:48:18.955697 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 10 00:48:18.955798 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 00:48:18.961501 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 10 00:48:18.970776 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 10 00:48:18.977273 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 10 00:48:18.982383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 00:48:18.988349 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 10 00:48:18.997850 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 10 00:48:19.002910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 00:48:19.004828 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 10 00:48:19.012807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 00:48:19.014818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 10 00:48:19.024289 systemd-journald[1293]: Time spent on flushing to /var/log/journal/a6053212887847e88b4c14c54902079a is 30.414ms for 890 entries. Mar 10 00:48:19.024289 systemd-journald[1293]: System Journal (/var/log/journal/a6053212887847e88b4c14c54902079a) is 8.0M, max 2.6G, 2.6G free. Mar 10 00:48:19.115831 systemd-journald[1293]: Received client request to flush runtime journal. Mar 10 00:48:19.115889 kernel: loop0: detected capacity change from 0 to 31320 Mar 10 00:48:19.025851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 10 00:48:19.045892 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 10 00:48:19.060895 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 10 00:48:19.072487 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 10 00:48:19.083226 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 10 00:48:19.095307 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 10 00:48:19.109237 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 10 00:48:19.128796 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 10 00:48:19.140499 udevadm[1327]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 10 00:48:19.141687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 10 00:48:19.154001 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 10 00:48:19.164044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 00:48:19.180223 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Mar 10 00:48:19.193811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 00:48:19.228457 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 10 00:48:19.232666 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Mar 10 00:48:19.232680 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Mar 10 00:48:19.232825 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 10 00:48:19.241680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 00:48:19.273731 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 10 00:48:19.308656 kernel: loop1: detected capacity change from 0 to 197488 Mar 10 00:48:19.350657 kernel: loop2: detected capacity change from 0 to 114328 Mar 10 00:48:19.454659 kernel: loop3: detected capacity change from 0 to 114432 Mar 10 00:48:19.571668 kernel: loop4: detected capacity change from 0 to 31320 Mar 10 00:48:19.584034 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 00:48:19.590145 kernel: loop5: detected capacity change from 0 to 197488 Mar 10 00:48:19.602803 kernel: loop6: detected capacity change from 0 to 114328 Mar 10 00:48:19.602956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 00:48:19.625694 kernel: loop7: detected capacity change from 0 to 114432 Mar 10 00:48:19.627049 systemd-udevd[1350]: Using default interface naming scheme 'v255'. Mar 10 00:48:19.634106 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 10 00:48:19.634502 (sd-merge)[1348]: Merged extensions into '/usr'. Mar 10 00:48:19.638296 systemd[1]: Reloading requested from client PID 1324 ('systemd-sysext') (unit systemd-sysext.service)... Mar 10 00:48:19.638310 systemd[1]: Reloading... 
Mar 10 00:48:19.724427 zram_generator::config[1391]: No configuration found. Mar 10 00:48:19.871710 kernel: mousedev: PS/2 mouse device common for all mice Mar 10 00:48:19.871811 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 10 00:48:19.914266 kernel: hv_vmbus: registering driver hv_balloon Mar 10 00:48:19.914381 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 10 00:48:19.918353 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 10 00:48:19.924664 kernel: hv_vmbus: registering driver hyperv_fb Mar 10 00:48:19.935206 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 10 00:48:19.935287 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 10 00:48:19.939655 kernel: Console: switching to colour dummy device 80x25 Mar 10 00:48:19.953917 kernel: Console: switching to colour frame buffer device 128x48 Mar 10 00:48:19.956500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:48:20.052358 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 10 00:48:20.052572 systemd[1]: Reloading finished in 413 ms. Mar 10 00:48:20.061837 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1381) Mar 10 00:48:20.085554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 00:48:20.097334 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 10 00:48:20.144874 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 10 00:48:20.155579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Mar 10 00:48:20.168789 systemd[1]: Starting ensure-sysext.service... Mar 10 00:48:20.173542 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 10 00:48:20.180039 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 10 00:48:20.187797 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 00:48:20.198820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 00:48:20.211286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 00:48:20.226669 lvm[1508]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 00:48:20.227099 systemd-tmpfiles[1511]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 10 00:48:20.227355 systemd-tmpfiles[1511]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 10 00:48:20.228964 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 00:48:20.237212 systemd-tmpfiles[1511]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 10 00:48:20.237397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 10 00:48:20.237463 systemd-tmpfiles[1511]: ACLs are not supported, ignoring. Mar 10 00:48:20.237512 systemd-tmpfiles[1511]: ACLs are not supported, ignoring. Mar 10 00:48:20.241996 systemd-tmpfiles[1511]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 00:48:20.242001 systemd-tmpfiles[1511]: Skipping /boot Mar 10 00:48:20.247332 systemd[1]: Reloading requested from client PID 1507 ('systemctl') (unit ensure-sysext.service)... Mar 10 00:48:20.249686 systemd[1]: Reloading... 
Mar 10 00:48:20.254156 systemd-tmpfiles[1511]: Detected autofs mount point /boot during canonicalization of boot. Mar 10 00:48:20.254161 systemd-tmpfiles[1511]: Skipping /boot Mar 10 00:48:20.362662 zram_generator::config[1550]: No configuration found. Mar 10 00:48:20.402438 systemd-networkd[1510]: lo: Link UP Mar 10 00:48:20.402447 systemd-networkd[1510]: lo: Gained carrier Mar 10 00:48:20.406450 systemd-networkd[1510]: Enumeration completed Mar 10 00:48:20.408396 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:20.408402 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 00:48:20.465664 kernel: mlx5_core 6771:00:02.0 enP26481s1: Link up Mar 10 00:48:20.475286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:48:20.491659 kernel: hv_netvsc 7ced8d86-7753-7ced-8d86-77537ced8d86 eth0: Data path switched to VF: enP26481s1 Mar 10 00:48:20.493186 systemd-networkd[1510]: enP26481s1: Link UP Mar 10 00:48:20.493279 systemd-networkd[1510]: eth0: Link UP Mar 10 00:48:20.493282 systemd-networkd[1510]: eth0: Gained carrier Mar 10 00:48:20.493297 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:20.497924 systemd-networkd[1510]: enP26481s1: Gained carrier Mar 10 00:48:20.503677 systemd-networkd[1510]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 10 00:48:20.550086 systemd[1]: Reloading finished in 300 ms. Mar 10 00:48:20.566965 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 00:48:20.572161 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 10 00:48:20.586671 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 10 00:48:20.592890 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 00:48:20.598956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 00:48:20.610521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 00:48:20.618868 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 00:48:20.629155 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 00:48:20.637904 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 10 00:48:20.651660 lvm[1617]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 00:48:20.654972 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 00:48:20.664839 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 00:48:20.680989 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 00:48:20.689059 ldconfig[1319]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 00:48:20.690332 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 00:48:20.699610 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 10 00:48:20.710910 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 00:48:20.717896 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 00:48:20.729839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 00:48:20.735973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 00:48:20.743271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 00:48:20.755928 augenrules[1641]: No rules
Mar 10 00:48:20.756069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 00:48:20.760830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 00:48:20.763974 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 00:48:20.776373 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 00:48:20.784280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 00:48:20.784424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 00:48:20.790151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 00:48:20.790283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 00:48:20.796274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 00:48:20.796403 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 00:48:20.804505 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 00:48:20.812096 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 10 00:48:20.821020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 00:48:20.825428 systemd-resolved[1625]: Positive Trust Anchors:
Mar 10 00:48:20.825615 systemd-resolved[1625]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 00:48:20.825677 systemd-resolved[1625]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 00:48:20.826933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 00:48:20.835598 systemd-resolved[1625]: Using system hostname 'ci-4081.3.6-n-9b959526b1'.
Mar 10 00:48:20.847992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 00:48:20.855904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 00:48:20.860908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 00:48:20.862050 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 00:48:20.869676 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 10 00:48:20.876023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 00:48:20.876262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 00:48:20.882467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 00:48:20.882743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 00:48:20.888891 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 00:48:20.889138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 00:48:20.900544 systemd[1]: Reached target network.target - Network.
Mar 10 00:48:20.904890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 00:48:20.910733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 00:48:20.917912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 00:48:20.923822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 00:48:20.930447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 00:48:20.938046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 00:48:20.942757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 00:48:20.943032 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 00:48:20.947531 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 00:48:20.949218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 00:48:20.949383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 00:48:20.955305 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 00:48:20.955453 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 00:48:20.960583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 00:48:20.960850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 00:48:20.966765 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 00:48:20.966922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 00:48:20.973551 systemd[1]: Finished ensure-sysext.service.
Mar 10 00:48:20.981894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 00:48:20.981938 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 00:48:20.986888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 00:48:20.992159 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 00:48:20.997735 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 00:48:21.002605 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 00:48:21.008118 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 00:48:21.013923 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 00:48:21.013951 systemd[1]: Reached target paths.target - Path Units.
Mar 10 00:48:21.018149 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 00:48:21.022821 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 00:48:21.028967 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 00:48:21.041265 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 00:48:21.045885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 00:48:21.046452 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 00:48:21.051319 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 00:48:21.055677 systemd[1]: Reached target basic.target - Basic System.
Mar 10 00:48:21.059966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 00:48:21.059994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 00:48:21.068731 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 10 00:48:21.076767 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 00:48:21.087898 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 10 00:48:21.093822 (chronyd)[1672]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 10 00:48:21.097852 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 00:48:21.103602 chronyd[1679]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 10 00:48:21.106794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 00:48:21.114292 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring
Mar 10 00:48:21.114527 chronyd[1679]: Loaded seccomp filter (level 2)
Mar 10 00:48:21.115255 jq[1680]: false
Mar 10 00:48:21.118427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 00:48:21.123322 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 00:48:21.123587 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 10 00:48:21.124700 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 10 00:48:21.130897 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 10 00:48:21.140506 KVP[1682]: KVP starting; pid is:1682
Mar 10 00:48:21.138893 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 00:48:21.145650 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 00:48:21.158520 extend-filesystems[1681]: Found loop4
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found loop5
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found loop6
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found loop7
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda1
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda2
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda3
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found usr
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda4
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda6
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda7
Mar 10 00:48:21.165406 extend-filesystems[1681]: Found sda9
Mar 10 00:48:21.165406 extend-filesystems[1681]: Checking size of /dev/sda9
Mar 10 00:48:21.332316 kernel: hv_utils: KVP IC version 4.0
Mar 10 00:48:21.332356 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1377)
Mar 10 00:48:21.160503 dbus-daemon[1675]: [system] SELinux support is enabled
Mar 10 00:48:21.160857 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 00:48:21.344280 extend-filesystems[1681]: Old size kept for /dev/sda9
Mar 10 00:48:21.344280 extend-filesystems[1681]: Found sr0
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.264 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.280 INFO Fetch successful
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.280 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.287 INFO Fetch successful
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.287 INFO Fetching http://168.63.129.16/machine/e8042e36-d02a-4476-9083-d3a02181ae77/a8535440%2D0a52%2D4e48%2D9307%2Dd7900d13db46.%5Fci%2D4081.3.6%2Dn%2D9b959526b1?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.290 INFO Fetch successful
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.290 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 10 00:48:21.373999 coreos-metadata[1674]: Mar 10 00:48:21.324 INFO Fetch successful
Mar 10 00:48:21.194080 KVP[1682]: KVP LIC Version: 3.1
Mar 10 00:48:21.176897 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 00:48:21.195423 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 00:48:21.204972 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 00:48:21.378451 update_engine[1703]: I20260310 00:48:21.258261 1703 main.cc:92] Flatcar Update Engine starting
Mar 10 00:48:21.378451 update_engine[1703]: I20260310 00:48:21.270009 1703 update_check_scheduler.cc:74] Next update check in 9m8s
Mar 10 00:48:21.205489 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 00:48:21.216847 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 00:48:21.380271 jq[1707]: true
Mar 10 00:48:21.226776 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 00:48:21.235449 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 00:48:21.259848 systemd[1]: Started chronyd.service - NTP client/server.
Mar 10 00:48:21.277075 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 00:48:21.277266 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 00:48:21.277533 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 00:48:21.277700 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 00:48:21.292310 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 00:48:21.292559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 00:48:21.338003 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 00:48:21.338178 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 00:48:21.346586 systemd-logind[1694]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 00:48:21.359306 systemd-logind[1694]: New seat seat0.
Mar 10 00:48:21.366551 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 00:48:21.383779 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 00:48:21.384874 (ntainerd)[1725]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 00:48:21.396201 jq[1724]: true
Mar 10 00:48:21.405751 tar[1713]: linux-arm64/LICENSE
Mar 10 00:48:21.405751 tar[1713]: linux-arm64/helm
Mar 10 00:48:21.405986 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 00:48:21.406116 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 00:48:21.419026 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 00:48:21.419150 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 00:48:21.432901 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 00:48:21.442235 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 10 00:48:21.451499 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 10 00:48:21.523517 bash[1766]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 00:48:21.525046 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 00:48:21.532999 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 10 00:48:21.678765 locksmithd[1750]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 00:48:21.716802 containerd[1725]: time="2026-03-10T00:48:21.715026400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 10 00:48:21.746811 containerd[1725]: time="2026-03-10T00:48:21.746760360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.748471 containerd[1725]: time="2026-03-10T00:48:21.748432720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:48:21.748582 containerd[1725]: time="2026-03-10T00:48:21.748568640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 10 00:48:21.748647 containerd[1725]: time="2026-03-10T00:48:21.748625480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 10 00:48:21.748852 containerd[1725]: time="2026-03-10T00:48:21.748835000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 10 00:48:21.748930 containerd[1725]: time="2026-03-10T00:48:21.748915280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749050 containerd[1725]: time="2026-03-10T00:48:21.749033400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749107 containerd[1725]: time="2026-03-10T00:48:21.749094840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749351 containerd[1725]: time="2026-03-10T00:48:21.749332480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749412 containerd[1725]: time="2026-03-10T00:48:21.749399360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749473 containerd[1725]: time="2026-03-10T00:48:21.749459800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749517 containerd[1725]: time="2026-03-10T00:48:21.749506720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749647 containerd[1725]: time="2026-03-10T00:48:21.749619640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.749912 containerd[1725]: time="2026-03-10T00:48:21.749895560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 10 00:48:21.750326 containerd[1725]: time="2026-03-10T00:48:21.750067440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 00:48:21.750326 containerd[1725]: time="2026-03-10T00:48:21.750086360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 10 00:48:21.750326 containerd[1725]: time="2026-03-10T00:48:21.750162800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 10 00:48:21.750326 containerd[1725]: time="2026-03-10T00:48:21.750203760Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 00:48:21.767984 containerd[1725]: time="2026-03-10T00:48:21.767930760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 10 00:48:21.768406 containerd[1725]: time="2026-03-10T00:48:21.768159000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 10 00:48:21.768540 containerd[1725]: time="2026-03-10T00:48:21.768188960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 10 00:48:21.768688 containerd[1725]: time="2026-03-10T00:48:21.768620520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 10 00:48:21.769128 containerd[1725]: time="2026-03-10T00:48:21.768758720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 10 00:48:21.769236 containerd[1725]: time="2026-03-10T00:48:21.769220240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 10 00:48:21.770163 containerd[1725]: time="2026-03-10T00:48:21.770068880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770579520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770602600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770616440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770639240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770656040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770669440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.770714 containerd[1725]: time="2026-03-10T00:48:21.770688800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.770703240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.770933840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.770952960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.770965720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.771248120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.771266080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.771279880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771314 containerd[1725]: time="2026-03-10T00:48:21.771293680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771563 containerd[1725]: time="2026-03-10T00:48:21.771306080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771563 containerd[1725]: time="2026-03-10T00:48:21.771512120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771563 containerd[1725]: time="2026-03-10T00:48:21.771526720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.771563 containerd[1725]: time="2026-03-10T00:48:21.771540360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772016 containerd[1725]: time="2026-03-10T00:48:21.771552400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772016 containerd[1725]: time="2026-03-10T00:48:21.771722640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772016 containerd[1725]: time="2026-03-10T00:48:21.771738240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772016 containerd[1725]: time="2026-03-10T00:48:21.771753800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772305 containerd[1725]: time="2026-03-10T00:48:21.772166320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772305 containerd[1725]: time="2026-03-10T00:48:21.772203040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 00:48:21.772305 containerd[1725]: time="2026-03-10T00:48:21.772230720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772305 containerd[1725]: time="2026-03-10T00:48:21.772243800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.772305 containerd[1725]: time="2026-03-10T00:48:21.772254480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773351240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773380840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773393040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773405520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773422480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773437920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773449040Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 00:48:21.773940 containerd[1725]: time="2026-03-10T00:48:21.773468120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 10 00:48:21.774130 containerd[1725]: time="2026-03-10T00:48:21.773788000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 10 00:48:21.774130 containerd[1725]: time="2026-03-10T00:48:21.773851360Z" level=info msg="Connect containerd service"
Mar 10 00:48:21.774130 containerd[1725]: time="2026-03-10T00:48:21.773887080Z" level=info msg="using legacy CRI server"
Mar 10 00:48:21.774130 containerd[1725]: time="2026-03-10T00:48:21.773894800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 00:48:21.774957 containerd[1725]: time="2026-03-10T00:48:21.774354680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 10 00:48:21.776142 containerd[1725]: time="2026-03-10T00:48:21.776120000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 00:48:21.776936 containerd[1725]: time="2026-03-10T00:48:21.776304040Z" level=info msg="Start subscribing containerd event"
Mar 10 00:48:21.776936 containerd[1725]: time="2026-03-10T00:48:21.776354080Z" level=info msg="Start recovering state"
Mar 10 00:48:21.776936 containerd[1725]: time="2026-03-10T00:48:21.776418920Z" level=info msg="Start event monitor"
Mar 10 00:48:21.776936 containerd[1725]: time="2026-03-10T00:48:21.776429960Z" level=info msg="Start snapshots syncer"
Mar 10 00:48:21.776936 containerd[1725]: time="2026-03-10T00:48:21.776438600Z" level=info msg="Start cni network conf syncer for default"
Mar 10 00:48:21.776936 containerd[1725]: time="2026-03-10T00:48:21.776445960Z" level=info msg="Start streaming server"
Mar 10 00:48:21.779248 containerd[1725]: time="2026-03-10T00:48:21.777574760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 10 00:48:21.779248 containerd[1725]: time="2026-03-10T00:48:21.777615800Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 10 00:48:21.779248 containerd[1725]: time="2026-03-10T00:48:21.777673400Z" level=info msg="containerd successfully booted in 0.065696s"
Mar 10 00:48:21.777751 systemd[1]: Started containerd.service - containerd container runtime.
Mar 10 00:48:21.954547 tar[1713]: linux-arm64/README.md
Mar 10 00:48:21.965677 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 10 00:48:21.986538 sshd_keygen[1705]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 10 00:48:22.005536 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 10 00:48:22.018853 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 10 00:48:22.028974 systemd[1]: issuegen.service: Deactivated successfully.
Mar 10 00:48:22.030684 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 10 00:48:22.042218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 10 00:48:22.054341 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 00:48:22.063949 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 00:48:22.069843 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 10 00:48:22.075473 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 00:48:22.354906 systemd-networkd[1510]: eth0: Gained IPv6LL Mar 10 00:48:22.359702 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 00:48:22.365888 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 00:48:22.377815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 00:48:22.383619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 00:48:22.388829 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 10 00:48:22.419528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 00:48:22.429805 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 10 00:48:23.047777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 00:48:23.053204 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 00:48:23.056123 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 00:48:23.063816 systemd[1]: Startup finished in 625ms (kernel) + 7.952s (initrd) + 5.986s (userspace) = 14.564s. 
Mar 10 00:48:23.072245 waagent[1818]: 2026-03-10T00:48:23.070804Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 10 00:48:23.078456 waagent[1818]: 2026-03-10T00:48:23.078372Z INFO Daemon Daemon OS: flatcar 4081.3.6 Mar 10 00:48:23.084049 waagent[1818]: 2026-03-10T00:48:23.083829Z INFO Daemon Daemon Python: 3.11.9 Mar 10 00:48:23.091775 waagent[1818]: 2026-03-10T00:48:23.090814Z INFO Daemon Daemon Run daemon Mar 10 00:48:23.094705 waagent[1818]: 2026-03-10T00:48:23.094602Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Mar 10 00:48:23.128386 waagent[1818]: 2026-03-10T00:48:23.128239Z INFO Daemon Daemon Using waagent for provisioning Mar 10 00:48:23.133280 waagent[1818]: 2026-03-10T00:48:23.133213Z INFO Daemon Daemon Activate resource disk Mar 10 00:48:23.137645 waagent[1818]: 2026-03-10T00:48:23.137484Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 10 00:48:23.148087 waagent[1818]: 2026-03-10T00:48:23.148019Z INFO Daemon Daemon Found device: None Mar 10 00:48:23.153337 waagent[1818]: 2026-03-10T00:48:23.152720Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 10 00:48:23.160830 waagent[1818]: 2026-03-10T00:48:23.160715Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 10 00:48:23.172179 waagent[1818]: 2026-03-10T00:48:23.172104Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 10 00:48:23.177052 waagent[1818]: 2026-03-10T00:48:23.176979Z INFO Daemon Daemon Running default provisioning handler Mar 10 00:48:23.189305 waagent[1818]: 2026-03-10T00:48:23.189223Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 10 00:48:23.203066 waagent[1818]: 2026-03-10T00:48:23.202991Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 10 00:48:23.212141 waagent[1818]: 2026-03-10T00:48:23.212068Z INFO Daemon Daemon cloud-init is enabled: False Mar 10 00:48:23.217027 waagent[1818]: 2026-03-10T00:48:23.216962Z INFO Daemon Daemon Copying ovf-env.xml Mar 10 00:48:23.223092 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:23.224517 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:23.236096 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 00:48:23.236907 systemd-logind[1694]: New session 2 of user core. Mar 10 00:48:23.243267 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 00:48:23.245894 systemd-logind[1694]: New session 1 of user core. Mar 10 00:48:23.265416 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 00:48:23.273885 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 00:48:23.286317 waagent[1818]: 2026-03-10T00:48:23.280174Z INFO Daemon Daemon Successfully mounted dvd Mar 10 00:48:23.291092 (systemd)[1844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 00:48:23.301161 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 10 00:48:23.307245 waagent[1818]: 2026-03-10T00:48:23.307163Z INFO Daemon Daemon Detect protocol endpoint Mar 10 00:48:23.313659 waagent[1818]: 2026-03-10T00:48:23.311366Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 10 00:48:23.316583 waagent[1818]: 2026-03-10T00:48:23.316118Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 10 00:48:23.322792 waagent[1818]: 2026-03-10T00:48:23.321984Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 10 00:48:23.326715 waagent[1818]: 2026-03-10T00:48:23.326652Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 10 00:48:23.332660 waagent[1818]: 2026-03-10T00:48:23.330821Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 10 00:48:23.351650 waagent[1818]: 2026-03-10T00:48:23.351413Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 10 00:48:23.357749 waagent[1818]: 2026-03-10T00:48:23.357296Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 10 00:48:23.361731 waagent[1818]: 2026-03-10T00:48:23.361676Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 10 00:48:23.432266 systemd[1844]: Queued start job for default target default.target. Mar 10 00:48:23.439073 systemd[1844]: Created slice app.slice - User Application Slice. Mar 10 00:48:23.439220 systemd[1844]: Reached target paths.target - Paths. Mar 10 00:48:23.439234 systemd[1844]: Reached target timers.target - Timers. Mar 10 00:48:23.442780 systemd[1844]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 00:48:23.452551 systemd[1844]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 00:48:23.452739 systemd[1844]: Reached target sockets.target - Sockets. Mar 10 00:48:23.452848 systemd[1844]: Reached target basic.target - Basic System. Mar 10 00:48:23.452942 systemd[1844]: Reached target default.target - Main User Target. Mar 10 00:48:23.453029 systemd[1844]: Startup finished in 156ms. Mar 10 00:48:23.453764 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 00:48:23.458797 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 10 00:48:23.461383 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 10 00:48:23.571641 waagent[1818]: 2026-03-10T00:48:23.571477Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 10 00:48:23.577272 waagent[1818]: 2026-03-10T00:48:23.577186Z INFO Daemon Daemon Forcing an update of the goal state. Mar 10 00:48:23.585699 waagent[1818]: 2026-03-10T00:48:23.585617Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 10 00:48:23.604993 waagent[1818]: 2026-03-10T00:48:23.604940Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 10 00:48:23.610323 waagent[1818]: 2026-03-10T00:48:23.610266Z INFO Daemon Mar 10 00:48:23.612791 waagent[1818]: 2026-03-10T00:48:23.612623Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 189f4df1-d6ab-4b21-97f4-633187ef32b0 eTag: 7868267592004189877 source: Fabric] Mar 10 00:48:23.621867 waagent[1818]: 2026-03-10T00:48:23.621818Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 10 00:48:23.623831 kubelet[1827]: E0310 00:48:23.623752 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 00:48:23.627563 waagent[1818]: 2026-03-10T00:48:23.627516Z INFO Daemon Mar 10 00:48:23.630048 waagent[1818]: 2026-03-10T00:48:23.629795Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 10 00:48:23.630799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 00:48:23.631086 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 10 00:48:23.639546 waagent[1818]: 2026-03-10T00:48:23.639503Z INFO Daemon Daemon Downloading artifacts profile blob Mar 10 00:48:23.780824 waagent[1818]: 2026-03-10T00:48:23.780745Z INFO Daemon Downloaded certificate {'thumbprint': '55F39153DCFDAC3634F2FF7FF41DB291278312DA', 'hasPrivateKey': True} Mar 10 00:48:23.788562 waagent[1818]: 2026-03-10T00:48:23.788512Z INFO Daemon Fetch goal state completed Mar 10 00:48:23.830791 waagent[1818]: 2026-03-10T00:48:23.830680Z INFO Daemon Daemon Starting provisioning Mar 10 00:48:23.834863 waagent[1818]: 2026-03-10T00:48:23.834807Z INFO Daemon Daemon Handle ovf-env.xml. Mar 10 00:48:23.838765 waagent[1818]: 2026-03-10T00:48:23.838717Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-9b959526b1] Mar 10 00:48:23.845313 waagent[1818]: 2026-03-10T00:48:23.845254Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-9b959526b1] Mar 10 00:48:23.850185 waagent[1818]: 2026-03-10T00:48:23.850135Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 10 00:48:23.855138 waagent[1818]: 2026-03-10T00:48:23.855095Z INFO Daemon Daemon Primary interface is [eth0] Mar 10 00:48:23.872015 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 00:48:23.872022 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 10 00:48:23.872062 systemd-networkd[1510]: eth0: DHCP lease lost Mar 10 00:48:23.873295 waagent[1818]: 2026-03-10T00:48:23.873226Z INFO Daemon Daemon Create user account if not exists Mar 10 00:48:23.877700 waagent[1818]: 2026-03-10T00:48:23.877651Z INFO Daemon Daemon User core already exists, skip useradd Mar 10 00:48:23.882099 waagent[1818]: 2026-03-10T00:48:23.882058Z INFO Daemon Daemon Configure sudoer Mar 10 00:48:23.885858 waagent[1818]: 2026-03-10T00:48:23.885811Z INFO Daemon Daemon Configure sshd Mar 10 00:48:23.886684 systemd-networkd[1510]: eth0: DHCPv6 lease lost Mar 10 00:48:23.889390 waagent[1818]: 2026-03-10T00:48:23.889338Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 10 00:48:23.899260 waagent[1818]: 2026-03-10T00:48:23.899210Z INFO Daemon Daemon Deploy ssh public key. Mar 10 00:48:23.913706 systemd-networkd[1510]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 10 00:48:24.968474 waagent[1818]: 2026-03-10T00:48:24.968423Z INFO Daemon Daemon Provisioning complete Mar 10 00:48:24.985213 waagent[1818]: 2026-03-10T00:48:24.985169Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 10 00:48:24.990205 waagent[1818]: 2026-03-10T00:48:24.990157Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 10 00:48:24.997425 waagent[1818]: 2026-03-10T00:48:24.997380Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 10 00:48:25.129770 waagent[1891]: 2026-03-10T00:48:25.129094Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 10 00:48:25.129770 waagent[1891]: 2026-03-10T00:48:25.129246Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Mar 10 00:48:25.129770 waagent[1891]: 2026-03-10T00:48:25.129303Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 10 00:48:25.148839 waagent[1891]: 2026-03-10T00:48:25.148760Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 10 00:48:25.149179 waagent[1891]: 2026-03-10T00:48:25.149139Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 10 00:48:25.149325 waagent[1891]: 2026-03-10T00:48:25.149291Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 10 00:48:25.157698 waagent[1891]: 2026-03-10T00:48:25.157610Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 10 00:48:25.164154 waagent[1891]: 2026-03-10T00:48:25.164108Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 10 00:48:25.164819 waagent[1891]: 2026-03-10T00:48:25.164778Z INFO ExtHandler Mar 10 00:48:25.164970 waagent[1891]: 2026-03-10T00:48:25.164939Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 13e8d694-9c3d-47f2-99ae-6a3eb31658bd eTag: 7868267592004189877 source: Fabric] Mar 10 00:48:25.165362 waagent[1891]: 2026-03-10T00:48:25.165324Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 10 00:48:25.166745 waagent[1891]: 2026-03-10T00:48:25.166008Z INFO ExtHandler Mar 10 00:48:25.166745 waagent[1891]: 2026-03-10T00:48:25.166090Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 10 00:48:25.171656 waagent[1891]: 2026-03-10T00:48:25.170197Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 10 00:48:25.242984 waagent[1891]: 2026-03-10T00:48:25.242860Z INFO ExtHandler Downloaded certificate {'thumbprint': '55F39153DCFDAC3634F2FF7FF41DB291278312DA', 'hasPrivateKey': True} Mar 10 00:48:25.243681 waagent[1891]: 2026-03-10T00:48:25.243613Z INFO ExtHandler Fetch goal state completed Mar 10 00:48:25.259751 waagent[1891]: 2026-03-10T00:48:25.259690Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1891 Mar 10 00:48:25.260019 waagent[1891]: 2026-03-10T00:48:25.259985Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 10 00:48:25.261731 waagent[1891]: 2026-03-10T00:48:25.261688Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Mar 10 00:48:25.262173 waagent[1891]: 2026-03-10T00:48:25.262136Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 10 00:48:25.273844 waagent[1891]: 2026-03-10T00:48:25.273805Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 10 00:48:25.274165 waagent[1891]: 2026-03-10T00:48:25.274127Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 10 00:48:25.280317 waagent[1891]: 2026-03-10T00:48:25.280281Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 10 00:48:25.286752 systemd[1]: Reloading requested from client PID 1904 ('systemctl') (unit waagent.service)... Mar 10 00:48:25.286988 systemd[1]: Reloading... 
Mar 10 00:48:25.355661 zram_generator::config[1938]: No configuration found. Mar 10 00:48:25.455321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:48:25.534146 systemd[1]: Reloading finished in 246 ms. Mar 10 00:48:25.557791 waagent[1891]: 2026-03-10T00:48:25.554929Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 10 00:48:25.562596 systemd[1]: Reloading requested from client PID 1993 ('systemctl') (unit waagent.service)... Mar 10 00:48:25.562611 systemd[1]: Reloading... Mar 10 00:48:25.636672 zram_generator::config[2023]: No configuration found. Mar 10 00:48:25.743867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:48:25.819309 systemd[1]: Reloading finished in 256 ms. Mar 10 00:48:25.848063 waagent[1891]: 2026-03-10T00:48:25.847251Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 10 00:48:25.848063 waagent[1891]: 2026-03-10T00:48:25.847413Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 10 00:48:25.962912 waagent[1891]: 2026-03-10T00:48:25.962834Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 10 00:48:25.963585 waagent[1891]: 2026-03-10T00:48:25.963539Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 10 00:48:25.964469 waagent[1891]: 2026-03-10T00:48:25.964417Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 10 00:48:25.964699 waagent[1891]: 2026-03-10T00:48:25.964637Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 10 00:48:25.965051 waagent[1891]: 2026-03-10T00:48:25.965003Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 10 00:48:25.965183 waagent[1891]: 2026-03-10T00:48:25.965110Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 10 00:48:25.965585 waagent[1891]: 2026-03-10T00:48:25.965544Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 10 00:48:25.965687 waagent[1891]: 2026-03-10T00:48:25.965624Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 10 00:48:25.965851 waagent[1891]: 2026-03-10T00:48:25.965814Z INFO EnvHandler ExtHandler Configure routes Mar 10 00:48:25.965912 waagent[1891]: 2026-03-10T00:48:25.965886Z INFO EnvHandler ExtHandler Gateway:None Mar 10 00:48:25.965959 waagent[1891]: 2026-03-10T00:48:25.965935Z INFO EnvHandler ExtHandler Routes:None Mar 10 00:48:25.966532 waagent[1891]: 2026-03-10T00:48:25.966488Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 10 00:48:25.967661 waagent[1891]: 2026-03-10T00:48:25.966682Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 10 00:48:25.967661 waagent[1891]: 2026-03-10T00:48:25.966920Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 10 00:48:25.967661 waagent[1891]: 2026-03-10T00:48:25.967115Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 10 00:48:25.967661 waagent[1891]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 10 00:48:25.967661 waagent[1891]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 10 00:48:25.967661 waagent[1891]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 10 00:48:25.967661 waagent[1891]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 10 00:48:25.967661 waagent[1891]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 10 00:48:25.967661 waagent[1891]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 10 00:48:25.969156 waagent[1891]: 2026-03-10T00:48:25.969071Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 10 00:48:25.969248 waagent[1891]: 2026-03-10T00:48:25.969158Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 10 00:48:25.971232 waagent[1891]: 2026-03-10T00:48:25.969937Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 10 00:48:25.982296 waagent[1891]: 2026-03-10T00:48:25.982242Z INFO ExtHandler ExtHandler Mar 10 00:48:25.982385 waagent[1891]: 2026-03-10T00:48:25.982350Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e50bd942-2081-4702-935b-4f2760c6999f correlation f077aef9-b78d-430c-9483-213b212b4d97 created: 2026-03-10T00:47:49.616844Z] Mar 10 00:48:25.984010 waagent[1891]: 2026-03-10T00:48:25.983678Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 10 00:48:25.985192 waagent[1891]: 2026-03-10T00:48:25.985029Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 10 00:48:25.991347 waagent[1891]: 2026-03-10T00:48:25.991289Z INFO MonitorHandler ExtHandler Network interfaces: Mar 10 00:48:25.991347 waagent[1891]: Executing ['ip', '-a', '-o', 'link']: Mar 10 00:48:25.991347 waagent[1891]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 10 00:48:25.991347 waagent[1891]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:86:77:53 brd ff:ff:ff:ff:ff:ff Mar 10 00:48:25.991347 waagent[1891]: 3: enP26481s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:86:77:53 brd ff:ff:ff:ff:ff:ff\ altname enP26481p0s2 Mar 10 00:48:25.991347 waagent[1891]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 10 00:48:25.991347 waagent[1891]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 10 00:48:25.991347 waagent[1891]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 10 00:48:25.991347 waagent[1891]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 10 00:48:25.991347 waagent[1891]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 10 00:48:25.991347 waagent[1891]: 2: eth0 inet6 fe80::7eed:8dff:fe86:7753/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 10 00:48:26.024666 waagent[1891]: 2026-03-10T00:48:26.024003Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B8234878-552F-43E8-BB87-3A23AC9C29C5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 10 00:48:26.032260 waagent[1891]: 2026-03-10T00:48:26.032187Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 10 00:48:26.032260 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 00:48:26.032260 waagent[1891]: pkts bytes target prot opt in out source destination Mar 10 00:48:26.032260 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 10 00:48:26.032260 waagent[1891]: pkts bytes target prot opt in out source destination Mar 10 00:48:26.032260 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 00:48:26.032260 waagent[1891]: pkts bytes target prot opt in out source destination Mar 10 00:48:26.032260 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 10 00:48:26.032260 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 10 00:48:26.032260 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 10 00:48:26.035784 waagent[1891]: 2026-03-10T00:48:26.035727Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 10 00:48:26.035784 waagent[1891]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 00:48:26.035784 waagent[1891]: pkts bytes target prot opt in out source destination Mar 10 00:48:26.035784 waagent[1891]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 10 00:48:26.035784 waagent[1891]: pkts bytes target prot opt in out source destination Mar 10 00:48:26.035784 waagent[1891]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 10 00:48:26.035784 waagent[1891]: pkts bytes target prot opt in out source destination Mar 10 00:48:26.035784 waagent[1891]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 10 00:48:26.035784 waagent[1891]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 10 00:48:26.035784 waagent[1891]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 10 00:48:26.036421 waagent[1891]: 2026-03-10T00:48:26.036299Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 10 
00:48:33.860749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 10 00:48:33.871340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 00:48:33.972151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 00:48:33.976780 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 00:48:34.064649 kubelet[2120]: E0310 00:48:34.062785 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 00:48:34.066399 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 00:48:34.066549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 00:48:35.827575 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 00:48:35.828788 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:43392.service - OpenSSH per-connection server daemon (10.200.16.10:43392). Mar 10 00:48:36.335497 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 43392 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:48:36.336852 sshd[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:36.341325 systemd-logind[1694]: New session 3 of user core. Mar 10 00:48:36.346814 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 00:48:36.788880 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:43402.service - OpenSSH per-connection server daemon (10.200.16.10:43402). 
Mar 10 00:48:37.278292 sshd[2133]: Accepted publickey for core from 10.200.16.10 port 43402 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:48:37.279082 sshd[2133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:37.282664 systemd-logind[1694]: New session 4 of user core. Mar 10 00:48:37.290756 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 00:48:37.626055 sshd[2133]: pam_unix(sshd:session): session closed for user core Mar 10 00:48:37.630041 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:43402.service: Deactivated successfully. Mar 10 00:48:37.631724 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 00:48:37.632483 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit. Mar 10 00:48:37.633397 systemd-logind[1694]: Removed session 4. Mar 10 00:48:37.719441 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:43412.service - OpenSSH per-connection server daemon (10.200.16.10:43412). Mar 10 00:48:38.203083 sshd[2140]: Accepted publickey for core from 10.200.16.10 port 43412 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:48:38.203908 sshd[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:38.207544 systemd-logind[1694]: New session 5 of user core. Mar 10 00:48:38.213798 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 00:48:38.547812 sshd[2140]: pam_unix(sshd:session): session closed for user core Mar 10 00:48:38.551333 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:43412.service: Deactivated successfully. Mar 10 00:48:38.552937 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 00:48:38.554310 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit. Mar 10 00:48:38.555276 systemd-logind[1694]: Removed session 5. 
Mar 10 00:48:38.638118 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:43422.service - OpenSSH per-connection server daemon (10.200.16.10:43422). Mar 10 00:48:39.127018 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 43422 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:48:39.128273 sshd[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:39.132736 systemd-logind[1694]: New session 6 of user core. Mar 10 00:48:39.139780 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 00:48:39.479530 sshd[2147]: pam_unix(sshd:session): session closed for user core Mar 10 00:48:39.482358 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:43422.service: Deactivated successfully. Mar 10 00:48:39.483985 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 00:48:39.485428 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit. Mar 10 00:48:39.486303 systemd-logind[1694]: Removed session 6. Mar 10 00:48:39.567402 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:43426.service - OpenSSH per-connection server daemon (10.200.16.10:43426). Mar 10 00:48:40.051434 sshd[2154]: Accepted publickey for core from 10.200.16.10 port 43426 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:48:40.052730 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:48:40.056269 systemd-logind[1694]: New session 7 of user core. Mar 10 00:48:40.065761 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 10 00:48:40.355241 sudo[2157]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 10 00:48:40.355514 sudo[2157]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 00:48:40.371830 sudo[2157]: pam_unix(sudo:session): session closed for user root
Mar 10 00:48:40.450071 sshd[2154]: pam_unix(sshd:session): session closed for user core
Mar 10 00:48:40.454092 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:43426.service: Deactivated successfully.
Mar 10 00:48:40.455696 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 00:48:40.457048 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit.
Mar 10 00:48:40.457995 systemd-logind[1694]: Removed session 7.
Mar 10 00:48:40.541557 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:47808.service - OpenSSH per-connection server daemon (10.200.16.10:47808).
Mar 10 00:48:41.028251 sshd[2162]: Accepted publickey for core from 10.200.16.10 port 47808 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:48:41.029100 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:48:41.032676 systemd-logind[1694]: New session 8 of user core.
Mar 10 00:48:41.039766 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 00:48:41.302694 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 10 00:48:41.303345 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 00:48:41.306410 sudo[2166]: pam_unix(sudo:session): session closed for user root
Mar 10 00:48:41.311120 sudo[2165]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 10 00:48:41.311379 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 00:48:41.323941 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 10 00:48:41.325306 auditctl[2169]: No rules
Mar 10 00:48:41.326420 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 00:48:41.326604 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 10 00:48:41.329722 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 00:48:41.353241 augenrules[2187]: No rules
Mar 10 00:48:41.354687 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 00:48:41.357027 sudo[2165]: pam_unix(sudo:session): session closed for user root
Mar 10 00:48:41.435844 sshd[2162]: pam_unix(sshd:session): session closed for user core
Mar 10 00:48:41.439306 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit.
Mar 10 00:48:41.440085 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:47808.service: Deactivated successfully.
Mar 10 00:48:41.443048 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 00:48:41.443976 systemd-logind[1694]: Removed session 8.
Mar 10 00:48:41.527851 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:47822.service - OpenSSH per-connection server daemon (10.200.16.10:47822).
Mar 10 00:48:42.007399 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 47822 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:48:42.008727 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:48:42.013425 systemd-logind[1694]: New session 9 of user core.
Mar 10 00:48:42.018793 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 00:48:42.281734 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 00:48:42.282008 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 00:48:42.627995 (dockerd)[2213]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 00:48:42.628411 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 00:48:42.881549 dockerd[2213]: time="2026-03-10T00:48:42.880762200Z" level=info msg="Starting up"
Mar 10 00:48:42.996823 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport73508965-merged.mount: Deactivated successfully.
Mar 10 00:48:43.073364 dockerd[2213]: time="2026-03-10T00:48:43.073318840Z" level=info msg="Loading containers: start."
Mar 10 00:48:43.170762 kernel: Initializing XFRM netlink socket
Mar 10 00:48:43.245181 systemd-networkd[1510]: docker0: Link UP
Mar 10 00:48:43.270645 dockerd[2213]: time="2026-03-10T00:48:43.270596040Z" level=info msg="Loading containers: done."
Mar 10 00:48:43.290511 dockerd[2213]: time="2026-03-10T00:48:43.290463240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 00:48:43.290682 dockerd[2213]: time="2026-03-10T00:48:43.290590440Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 10 00:48:43.290754 dockerd[2213]: time="2026-03-10T00:48:43.290731640Z" level=info msg="Daemon has completed initialization"
Mar 10 00:48:43.363537 dockerd[2213]: time="2026-03-10T00:48:43.363216040Z" level=info msg="API listen on /run/docker.sock"
Mar 10 00:48:43.363678 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 00:48:43.728538 containerd[1725]: time="2026-03-10T00:48:43.728500040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 10 00:48:44.110793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 00:48:44.116802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:48:44.224235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:48:44.234941 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:48:44.316995 kubelet[2355]: E0310 00:48:44.316937 2355 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:48:44.319710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:48:44.319963 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:48:44.889623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257292746.mount: Deactivated successfully.
Mar 10 00:48:44.905240 chronyd[1679]: Selected source PHC0
Mar 10 00:48:46.446670 containerd[1725]: time="2026-03-10T00:48:46.446033603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:46.449202 containerd[1725]: time="2026-03-10T00:48:46.449167285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796"
Mar 10 00:48:46.452945 containerd[1725]: time="2026-03-10T00:48:46.452912367Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:46.458014 containerd[1725]: time="2026-03-10T00:48:46.457970291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:46.459226 containerd[1725]: time="2026-03-10T00:48:46.459183852Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 2.730638732s"
Mar 10 00:48:46.459261 containerd[1725]: time="2026-03-10T00:48:46.459234132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\""
Mar 10 00:48:46.460119 containerd[1725]: time="2026-03-10T00:48:46.460089532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 10 00:48:48.082413 containerd[1725]: time="2026-03-10T00:48:48.082359273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:48.085650 containerd[1725]: time="2026-03-10T00:48:48.085588715Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039"
Mar 10 00:48:48.089798 containerd[1725]: time="2026-03-10T00:48:48.089383998Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:48.095853 containerd[1725]: time="2026-03-10T00:48:48.095799322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:48.097653 containerd[1725]: time="2026-03-10T00:48:48.096775323Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.636652511s"
Mar 10 00:48:48.097653 containerd[1725]: time="2026-03-10T00:48:48.096809843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\""
Mar 10 00:48:48.097653 containerd[1725]: time="2026-03-10T00:48:48.097290043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 10 00:48:49.367406 containerd[1725]: time="2026-03-10T00:48:49.367354379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:49.370578 containerd[1725]: time="2026-03-10T00:48:49.370545142Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901"
Mar 10 00:48:49.374590 containerd[1725]: time="2026-03-10T00:48:49.374559784Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:49.380427 containerd[1725]: time="2026-03-10T00:48:49.380399148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:49.382397 containerd[1725]: time="2026-03-10T00:48:49.382369830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.285040627s"
Mar 10 00:48:49.382427 containerd[1725]: time="2026-03-10T00:48:49.382404030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\""
Mar 10 00:48:49.382829 containerd[1725]: time="2026-03-10T00:48:49.382798190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 10 00:48:50.445670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625242784.mount: Deactivated successfully.
Mar 10 00:48:50.682513 containerd[1725]: time="2026-03-10T00:48:50.681815667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:50.684698 containerd[1725]: time="2026-03-10T00:48:50.684664469Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583"
Mar 10 00:48:50.688295 containerd[1725]: time="2026-03-10T00:48:50.688247391Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:50.692760 containerd[1725]: time="2026-03-10T00:48:50.692719194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:50.693993 containerd[1725]: time="2026-03-10T00:48:50.693504755Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.310671285s"
Mar 10 00:48:50.693993 containerd[1725]: time="2026-03-10T00:48:50.693537635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\""
Mar 10 00:48:50.694096 containerd[1725]: time="2026-03-10T00:48:50.693992195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 10 00:48:51.365494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611645877.mount: Deactivated successfully.
Mar 10 00:48:52.592000 containerd[1725]: time="2026-03-10T00:48:52.591949054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:52.595222 containerd[1725]: time="2026-03-10T00:48:52.595187016Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Mar 10 00:48:52.598781 containerd[1725]: time="2026-03-10T00:48:52.598708179Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:52.606243 containerd[1725]: time="2026-03-10T00:48:52.606115424Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.912094869s"
Mar 10 00:48:52.606243 containerd[1725]: time="2026-03-10T00:48:52.606159024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Mar 10 00:48:52.607056 containerd[1725]: time="2026-03-10T00:48:52.607024905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 10 00:48:52.607283 containerd[1725]: time="2026-03-10T00:48:52.607186265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:53.179859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350943218.mount: Deactivated successfully.
Mar 10 00:48:53.202279 containerd[1725]: time="2026-03-10T00:48:53.201494069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:53.205245 containerd[1725]: time="2026-03-10T00:48:53.205212479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 10 00:48:53.208649 containerd[1725]: time="2026-03-10T00:48:53.208597407Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:53.214429 containerd[1725]: time="2026-03-10T00:48:53.213536179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:53.214429 containerd[1725]: time="2026-03-10T00:48:53.214212781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 607.072156ms"
Mar 10 00:48:53.214429 containerd[1725]: time="2026-03-10T00:48:53.214249301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 10 00:48:53.215074 containerd[1725]: time="2026-03-10T00:48:53.215049223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 10 00:48:53.936015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549932883.mount: Deactivated successfully.
Mar 10 00:48:54.360740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 10 00:48:54.369851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:48:54.486981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:48:54.495887 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 00:48:54.526466 kubelet[2514]: E0310 00:48:54.526402 2514 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 00:48:54.528997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 00:48:54.529137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 00:48:55.752780 containerd[1725]: time="2026-03-10T00:48:55.752730689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:55.756264 containerd[1725]: time="2026-03-10T00:48:55.756046818Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165"
Mar 10 00:48:55.761650 containerd[1725]: time="2026-03-10T00:48:55.759915867Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:55.767835 containerd[1725]: time="2026-03-10T00:48:55.767790967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 00:48:55.768819 containerd[1725]: time="2026-03-10T00:48:55.768785489Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 2.553702706s"
Mar 10 00:48:55.768916 containerd[1725]: time="2026-03-10T00:48:55.768901049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Mar 10 00:48:58.388019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:48:58.393845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:48:58.423870 systemd[1]: Reloading requested from client PID 2601 ('systemctl') (unit session-9.scope)...
Mar 10 00:48:58.423885 systemd[1]: Reloading...
Mar 10 00:48:58.536690 zram_generator::config[2647]: No configuration found.
Mar 10 00:48:58.624918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 00:48:58.703086 systemd[1]: Reloading finished in 278 ms.
Mar 10 00:48:58.825444 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 10 00:48:58.825540 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 10 00:48:58.825785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:48:58.830943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 00:48:59.039822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 00:48:59.048105 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 00:48:59.082777 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 00:48:59.416743 kubelet[2707]: I0310 00:48:59.416685 2707 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 10 00:48:59.416743 kubelet[2707]: I0310 00:48:59.416732 2707 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 00:48:59.418010 kubelet[2707]: I0310 00:48:59.417993 2707 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 00:48:59.418046 kubelet[2707]: I0310 00:48:59.418011 2707 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 00:48:59.418286 kubelet[2707]: I0310 00:48:59.418275 2707 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 10 00:48:59.564686 kubelet[2707]: E0310 00:48:59.564606 2707 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 00:48:59.564812 kubelet[2707]: I0310 00:48:59.564758 2707 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 00:48:59.567736 kubelet[2707]: E0310 00:48:59.567683 2707 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 00:48:59.567859 kubelet[2707]: I0310 00:48:59.567757 2707 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 10 00:48:59.570605 kubelet[2707]: I0310 00:48:59.570574 2707 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 00:48:59.571425 kubelet[2707]: I0310 00:48:59.571384 2707 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 00:48:59.571583 kubelet[2707]: I0310 00:48:59.571427 2707 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-9b959526b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 00:48:59.571694 kubelet[2707]: I0310 00:48:59.571590 2707 topology_manager.go:143] "Creating topology manager with none policy"
Mar 10 00:48:59.571694 kubelet[2707]: I0310 00:48:59.571599 2707 container_manager_linux.go:308] "Creating device plugin manager"
Mar 10 00:48:59.571746 kubelet[2707]: I0310 00:48:59.571709 2707 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 00:48:59.576929 kubelet[2707]: I0310 00:48:59.576909 2707 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 10 00:48:59.577070 kubelet[2707]: I0310 00:48:59.577059 2707 kubelet.go:482] "Attempting to sync node with API server"
Mar 10 00:48:59.577104 kubelet[2707]: I0310 00:48:59.577079 2707 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 00:48:59.577104 kubelet[2707]: I0310 00:48:59.577096 2707 kubelet.go:394] "Adding apiserver pod source"
Mar 10 00:48:59.578694 kubelet[2707]: I0310 00:48:59.577104 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 00:48:59.580781 kubelet[2707]: I0310 00:48:59.580752 2707 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 00:48:59.581715 kubelet[2707]: I0310 00:48:59.581691 2707 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 00:48:59.581773 kubelet[2707]: I0310 00:48:59.581726 2707 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 00:48:59.581773 kubelet[2707]: W0310 00:48:59.581767 2707 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 00:48:59.583956 kubelet[2707]: I0310 00:48:59.583926 2707 server.go:1257] "Started kubelet"
Mar 10 00:48:59.584146 kubelet[2707]: I0310 00:48:59.584108 2707 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 00:48:59.586271 kubelet[2707]: I0310 00:48:59.586197 2707 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 00:48:59.586271 kubelet[2707]: I0310 00:48:59.586278 2707 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 00:48:59.586566 kubelet[2707]: I0310 00:48:59.586540 2707 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 00:48:59.587500 kubelet[2707]: I0310 00:48:59.587459 2707 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 00:48:59.590385 kubelet[2707]: I0310 00:48:59.590365 2707 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 10 00:48:59.595335 kubelet[2707]: E0310 00:48:59.592823 2707 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-9b959526b1.189b5474f3d0b49a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-9b959526b1,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-9b959526b1,},FirstTimestamp:2026-03-10 00:48:59.583902874 +0000 UTC m=+0.532643351,LastTimestamp:2026-03-10 00:48:59.583902874 +0000 UTC m=+0.532643351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-9b959526b1,}"
Mar 10 00:48:59.596714 kubelet[2707]: I0310 00:48:59.595795 2707 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 00:48:59.597425 kubelet[2707]: I0310 00:48:59.597404 2707 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 10 00:48:59.597589 kubelet[2707]: E0310 00:48:59.597570 2707 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-9b959526b1\" not found"
Mar 10 00:48:59.598338 kubelet[2707]: E0310 00:48:59.598304 2707 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-9b959526b1?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms"
Mar 10 00:48:59.598763 kubelet[2707]: I0310 00:48:59.598739 2707 factory.go:223] Registration of the systemd container factory successfully
Mar 10 00:48:59.598933 kubelet[2707]: I0310 00:48:59.598917 2707 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 00:48:59.601653 kubelet[2707]: I0310 00:48:59.601622 2707 factory.go:223] Registration of the containerd container factory successfully
Mar 10 00:48:59.603734 kubelet[2707]: I0310 00:48:59.603704 2707 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 00:48:59.606432 kubelet[2707]: I0310 00:48:59.606403 2707 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 00:48:59.606495 kubelet[2707]: I0310 00:48:59.606461 2707 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 00:48:59.624314 kubelet[2707]: E0310 00:48:59.624278 2707 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 00:48:59.626939 kubelet[2707]: I0310 00:48:59.626790 2707 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 00:48:59.626939 kubelet[2707]: I0310 00:48:59.626821 2707 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 10 00:48:59.626939 kubelet[2707]: I0310 00:48:59.626840 2707 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 10 00:48:59.626939 kubelet[2707]: E0310 00:48:59.626877 2707 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 00:48:59.669214 kubelet[2707]: I0310 00:48:59.668952 2707 cpu_manager.go:225] "Starting" policy="none"
Mar 10 00:48:59.669214 kubelet[2707]: I0310 00:48:59.668970 2707 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 10 00:48:59.669214 kubelet[2707]: I0310 00:48:59.668990 2707 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 10 00:48:59.698621 kubelet[2707]: E0310 00:48:59.698588 2707 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-9b959526b1\" not found"
Mar 10 00:48:59.727912 kubelet[2707]: I0310 00:48:59.727627 2707 policy_none.go:50] "Start"
Mar 10 00:48:59.727912 kubelet[2707]: I0310 00:48:59.727665 2707 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 00:48:59.727912 kubelet[2707]: I0310 00:48:59.727682 2707 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 00:48:59.728056 kubelet[2707]: E0310 00:48:59.727932 2707 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 00:48:59.751718 kubelet[2707]: I0310 00:48:59.751691 2707 policy_none.go:44] "Start"
Mar 10 00:48:59.756319 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 10 00:48:59.768119 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 10 00:48:59.779041 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 10 00:48:59.780169 kubelet[2707]: E0310 00:48:59.780149 2707 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 00:48:59.781294 kubelet[2707]: I0310 00:48:59.780671 2707 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 10 00:48:59.781294 kubelet[2707]: I0310 00:48:59.780687 2707 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 00:48:59.781973 kubelet[2707]: I0310 00:48:59.781955 2707 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 10 00:48:59.782797 kubelet[2707]: E0310 00:48:59.782740 2707 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 00:48:59.782797 kubelet[2707]: E0310 00:48:59.782772 2707 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-9b959526b1\" not found"
Mar 10 00:48:59.798835 kubelet[2707]: E0310 00:48:59.798803 2707 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-9b959526b1?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms"
Mar 10 00:48:59.883582 kubelet[2707]: I0310 00:48:59.883550 2707 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-9b959526b1"
Mar 10 00:48:59.883985 kubelet[2707]: E0310 00:48:59.883954 2707 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.6-n-9b959526b1"
Mar 10 00:48:59.953745 systemd[1]: Created slice kubepods-burstable-podf188aeb08d1f466ff016197c82f79e0a.slice - libcontainer container kubepods-burstable-podf188aeb08d1f466ff016197c82f79e0a.slice.
Mar 10 00:48:59.959444 kubelet[2707]: E0310 00:48:59.959269 2707 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-9b959526b1\" not found" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.007831 kubelet[2707]: I0310 00:49:00.007791 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f188aeb08d1f466ff016197c82f79e0a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-9b959526b1\" (UID: \"f188aeb08d1f466ff016197c82f79e0a\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.007831 kubelet[2707]: I0310 00:49:00.007839 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec01d02f8c0450723c7e9f2677f5927c-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" (UID: \"ec01d02f8c0450723c7e9f2677f5927c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.501187 kubelet[2707]: I0310 00:49:00.007858 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec01d02f8c0450723c7e9f2677f5927c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" (UID: \"ec01d02f8c0450723c7e9f2677f5927c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.501187 kubelet[2707]: I0310 00:49:00.007874 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec01d02f8c0450723c7e9f2677f5927c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" (UID: \"ec01d02f8c0450723c7e9f2677f5927c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.501187 kubelet[2707]: E0310 00:49:00.008171 2707 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-9b959526b1.189b5474f3d0b49a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-9b959526b1,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-9b959526b1,},FirstTimestamp:2026-03-10 00:48:59.583902874 +0000 UTC m=+0.532643351,LastTimestamp:2026-03-10 00:48:59.583902874 +0000 UTC m=+0.532643351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-9b959526b1,}" Mar 10 00:49:00.501187 kubelet[2707]: I0310 00:49:00.086176 2707 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.501327 kubelet[2707]: E0310 00:49:00.086461 2707 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.501327 kubelet[2707]: E0310 00:49:00.199539 2707 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-9b959526b1?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" Mar 10 00:49:00.501327 kubelet[2707]: I0310 00:49:00.488764 2707 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.501327 kubelet[2707]: E0310 00:49:00.489038 2707 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 
10.200.20.10:6443: connect: connection refused" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:00.709364 systemd[1]: Created slice kubepods-burstable-podec01d02f8c0450723c7e9f2677f5927c.slice - libcontainer container kubepods-burstable-podec01d02f8c0450723c7e9f2677f5927c.slice. Mar 10 00:49:01.244010 kubelet[2707]: E0310 00:49:00.710769 2707 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-9b959526b1\" not found" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.244010 kubelet[2707]: I0310 00:49:00.718101 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.244010 kubelet[2707]: I0310 00:49:00.718159 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.244010 kubelet[2707]: I0310 00:49:00.718179 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.244010 kubelet[2707]: I0310 00:49:00.718200 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.244168 kubelet[2707]: I0310 00:49:00.718217 2707 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.244168 kubelet[2707]: E0310 00:49:01.000251 2707 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-9b959526b1?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="1.6s" Mar 10 00:49:01.291314 kubelet[2707]: I0310 00:49:01.291231 2707 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.291578 kubelet[2707]: E0310 00:49:01.291539 2707 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.353029 containerd[1725]: time="2026-03-10T00:49:01.352987931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-9b959526b1,Uid:f188aeb08d1f466ff016197c82f79e0a,Namespace:kube-system,Attempt:0,}" Mar 10 00:49:01.502025 containerd[1725]: time="2026-03-10T00:49:01.501679969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-9b959526b1,Uid:ec01d02f8c0450723c7e9f2677f5927c,Namespace:kube-system,Attempt:0,}" Mar 10 
00:49:01.510944 systemd[1]: Created slice kubepods-burstable-pod41f40b08e857406b971be852d482e8c4.slice - libcontainer container kubepods-burstable-pod41f40b08e857406b971be852d482e8c4.slice. Mar 10 00:49:01.512852 kubelet[2707]: E0310 00:49:01.512830 2707 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-9b959526b1\" not found" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:01.519180 containerd[1725]: time="2026-03-10T00:49:01.519147713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-9b959526b1,Uid:41f40b08e857406b971be852d482e8c4,Namespace:kube-system,Attempt:0,}" Mar 10 00:49:01.683029 kubelet[2707]: E0310 00:49:01.682989 2707 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 00:49:02.140292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638781904.mount: Deactivated successfully. 
Mar 10 00:49:02.177240 containerd[1725]: time="2026-03-10T00:49:02.176414230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 00:49:02.179943 containerd[1725]: time="2026-03-10T00:49:02.179862475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 10 00:49:02.184112 containerd[1725]: time="2026-03-10T00:49:02.183344440Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 00:49:02.186915 containerd[1725]: time="2026-03-10T00:49:02.186880884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 00:49:02.190188 containerd[1725]: time="2026-03-10T00:49:02.190151329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 00:49:02.193864 containerd[1725]: time="2026-03-10T00:49:02.193834294Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 00:49:02.198014 containerd[1725]: time="2026-03-10T00:49:02.196715738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 00:49:02.201651 containerd[1725]: time="2026-03-10T00:49:02.201312544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 00:49:02.202410 
containerd[1725]: time="2026-03-10T00:49:02.202382385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 700.628656ms" Mar 10 00:49:02.207149 containerd[1725]: time="2026-03-10T00:49:02.207105631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 854.03554ms" Mar 10 00:49:02.219571 containerd[1725]: time="2026-03-10T00:49:02.219534048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 700.309495ms" Mar 10 00:49:02.437892 containerd[1725]: time="2026-03-10T00:49:02.436823818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:02.437892 containerd[1725]: time="2026-03-10T00:49:02.436881658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:02.437892 containerd[1725]: time="2026-03-10T00:49:02.436900618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:02.437892 containerd[1725]: time="2026-03-10T00:49:02.437608659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:02.440488 containerd[1725]: time="2026-03-10T00:49:02.439883302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:02.440488 containerd[1725]: time="2026-03-10T00:49:02.439933102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:02.440488 containerd[1725]: time="2026-03-10T00:49:02.439950582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:02.440488 containerd[1725]: time="2026-03-10T00:49:02.440030622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:02.440795 containerd[1725]: time="2026-03-10T00:49:02.440389263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:02.440795 containerd[1725]: time="2026-03-10T00:49:02.440434183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:02.440795 containerd[1725]: time="2026-03-10T00:49:02.440449663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:02.440795 containerd[1725]: time="2026-03-10T00:49:02.440514303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:02.462836 systemd[1]: Started cri-containerd-11206689058b74229de6dc1d95aec7bd146a620ea5a9d78babd1add36bc33a50.scope - libcontainer container 11206689058b74229de6dc1d95aec7bd146a620ea5a9d78babd1add36bc33a50. 
Mar 10 00:49:02.468773 systemd[1]: Started cri-containerd-c93f19cb0c2d3cc5a6c0ac6f9f53fa0e78c5c376e60da92037a1bb2390997e76.scope - libcontainer container c93f19cb0c2d3cc5a6c0ac6f9f53fa0e78c5c376e60da92037a1bb2390997e76. Mar 10 00:49:02.474375 systemd[1]: Started cri-containerd-aa2b7ae76b4b4055dfad22acbcaa21dde24fe445ba89b7201e8b25643a82e6ff.scope - libcontainer container aa2b7ae76b4b4055dfad22acbcaa21dde24fe445ba89b7201e8b25643a82e6ff. Mar 10 00:49:02.521733 containerd[1725]: time="2026-03-10T00:49:02.521626491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-9b959526b1,Uid:ec01d02f8c0450723c7e9f2677f5927c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c93f19cb0c2d3cc5a6c0ac6f9f53fa0e78c5c376e60da92037a1bb2390997e76\"" Mar 10 00:49:02.525231 containerd[1725]: time="2026-03-10T00:49:02.525173856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-9b959526b1,Uid:f188aeb08d1f466ff016197c82f79e0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa2b7ae76b4b4055dfad22acbcaa21dde24fe445ba89b7201e8b25643a82e6ff\"" Mar 10 00:49:02.529118 containerd[1725]: time="2026-03-10T00:49:02.528871221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-9b959526b1,Uid:41f40b08e857406b971be852d482e8c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"11206689058b74229de6dc1d95aec7bd146a620ea5a9d78babd1add36bc33a50\"" Mar 10 00:49:02.536280 containerd[1725]: time="2026-03-10T00:49:02.536245151Z" level=info msg="CreateContainer within sandbox \"c93f19cb0c2d3cc5a6c0ac6f9f53fa0e78c5c376e60da92037a1bb2390997e76\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 00:49:02.541413 containerd[1725]: time="2026-03-10T00:49:02.541377358Z" level=info msg="CreateContainer within sandbox \"aa2b7ae76b4b4055dfad22acbcaa21dde24fe445ba89b7201e8b25643a82e6ff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 
00:49:02.547093 containerd[1725]: time="2026-03-10T00:49:02.547056845Z" level=info msg="CreateContainer within sandbox \"11206689058b74229de6dc1d95aec7bd146a620ea5a9d78babd1add36bc33a50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 00:49:02.600954 kubelet[2707]: E0310 00:49:02.600913 2707 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-9b959526b1?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="3.2s" Mar 10 00:49:02.612768 containerd[1725]: time="2026-03-10T00:49:02.612723173Z" level=info msg="CreateContainer within sandbox \"c93f19cb0c2d3cc5a6c0ac6f9f53fa0e78c5c376e60da92037a1bb2390997e76\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d034a25e925e206e38841f32b5c0b62a8f313aac1fd740d8aa5f7eeb257d40d4\"" Mar 10 00:49:02.613447 containerd[1725]: time="2026-03-10T00:49:02.613420934Z" level=info msg="StartContainer for \"d034a25e925e206e38841f32b5c0b62a8f313aac1fd740d8aa5f7eeb257d40d4\"" Mar 10 00:49:02.619480 containerd[1725]: time="2026-03-10T00:49:02.619380982Z" level=info msg="CreateContainer within sandbox \"11206689058b74229de6dc1d95aec7bd146a620ea5a9d78babd1add36bc33a50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"68dae3e5c274404b9303441feb6cdf0658832bf49c90f065f941d022cc10d0e7\"" Mar 10 00:49:02.620676 containerd[1725]: time="2026-03-10T00:49:02.620104863Z" level=info msg="StartContainer for \"68dae3e5c274404b9303441feb6cdf0658832bf49c90f065f941d022cc10d0e7\"" Mar 10 00:49:02.621318 containerd[1725]: time="2026-03-10T00:49:02.621287705Z" level=info msg="CreateContainer within sandbox \"aa2b7ae76b4b4055dfad22acbcaa21dde24fe445ba89b7201e8b25643a82e6ff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"c162b96629ed9173c8730e50bde58a3c64100ebb980873c52d825ea0e0b98a17\"" Mar 10 00:49:02.621686 containerd[1725]: time="2026-03-10T00:49:02.621662945Z" level=info msg="StartContainer for \"c162b96629ed9173c8730e50bde58a3c64100ebb980873c52d825ea0e0b98a17\"" Mar 10 00:49:02.655865 systemd[1]: Started cri-containerd-d034a25e925e206e38841f32b5c0b62a8f313aac1fd740d8aa5f7eeb257d40d4.scope - libcontainer container d034a25e925e206e38841f32b5c0b62a8f313aac1fd740d8aa5f7eeb257d40d4. Mar 10 00:49:02.663988 systemd[1]: Started cri-containerd-68dae3e5c274404b9303441feb6cdf0658832bf49c90f065f941d022cc10d0e7.scope - libcontainer container 68dae3e5c274404b9303441feb6cdf0658832bf49c90f065f941d022cc10d0e7. Mar 10 00:49:02.669055 systemd[1]: Started cri-containerd-c162b96629ed9173c8730e50bde58a3c64100ebb980873c52d825ea0e0b98a17.scope - libcontainer container c162b96629ed9173c8730e50bde58a3c64100ebb980873c52d825ea0e0b98a17. Mar 10 00:49:02.715854 containerd[1725]: time="2026-03-10T00:49:02.714814109Z" level=info msg="StartContainer for \"d034a25e925e206e38841f32b5c0b62a8f313aac1fd740d8aa5f7eeb257d40d4\" returns successfully" Mar 10 00:49:02.725444 containerd[1725]: time="2026-03-10T00:49:02.725045523Z" level=info msg="StartContainer for \"c162b96629ed9173c8730e50bde58a3c64100ebb980873c52d825ea0e0b98a17\" returns successfully" Mar 10 00:49:02.735954 containerd[1725]: time="2026-03-10T00:49:02.735905898Z" level=info msg="StartContainer for \"68dae3e5c274404b9303441feb6cdf0658832bf49c90f065f941d022cc10d0e7\" returns successfully" Mar 10 00:49:02.894655 kubelet[2707]: I0310 00:49:02.894392 2707 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:03.649570 kubelet[2707]: E0310 00:49:03.649529 2707 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-9b959526b1\" not found" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:03.653454 kubelet[2707]: E0310 00:49:03.653425 2707 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-9b959526b1\" not found" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:03.657685 kubelet[2707]: E0310 00:49:03.657535 2707 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-9b959526b1\" not found" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.398432 kubelet[2707]: I0310 00:49:04.398241 2707 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.398432 kubelet[2707]: E0310 00:49:04.398283 2707 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-9b959526b1\": node \"ci-4081.3.6-n-9b959526b1\" not found" Mar 10 00:49:04.398432 kubelet[2707]: I0310 00:49:04.398305 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.412077 kubelet[2707]: E0310 00:49:04.412037 2707 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-9b959526b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.412077 kubelet[2707]: I0310 00:49:04.412069 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.416690 kubelet[2707]: E0310 00:49:04.416475 2707 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.416690 kubelet[2707]: I0310 00:49:04.416504 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.422656 kubelet[2707]: E0310 
00:49:04.422599 2707 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.582436 kubelet[2707]: I0310 00:49:04.582383 2707 apiserver.go:52] "Watching apiserver" Mar 10 00:49:04.606824 kubelet[2707]: I0310 00:49:04.606763 2707 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 00:49:04.656408 kubelet[2707]: I0310 00:49:04.656310 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.657810 kubelet[2707]: I0310 00:49:04.656899 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.657810 kubelet[2707]: I0310 00:49:04.657610 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.660175 kubelet[2707]: E0310 00:49:04.659835 2707 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.660175 kubelet[2707]: E0310 00:49:04.659898 2707 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:04.660175 kubelet[2707]: E0310 00:49:04.659840 2707 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-9b959526b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 
00:49:05.660661 kubelet[2707]: I0310 00:49:05.658164 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:05.660661 kubelet[2707]: I0310 00:49:05.658207 2707 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:05.670335 kubelet[2707]: I0310 00:49:05.670293 2707 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:05.673819 kubelet[2707]: I0310 00:49:05.673793 2707 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:06.503935 update_engine[1703]: I20260310 00:49:06.503875 1703 update_attempter.cc:509] Updating boot flags... Mar 10 00:49:06.542732 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (3006) Mar 10 00:49:06.617076 systemd[1]: Reloading requested from client PID 3034 ('systemctl') (unit session-9.scope)... Mar 10 00:49:06.617089 systemd[1]: Reloading... Mar 10 00:49:06.702663 zram_generator::config[3070]: No configuration found. Mar 10 00:49:06.824343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 00:49:06.917132 systemd[1]: Reloading finished in 299 ms. Mar 10 00:49:06.954915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 00:49:06.971022 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 00:49:06.972673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 00:49:06.978871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 10 00:49:07.082502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 00:49:07.092145 (kubelet)[3138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 00:49:07.129787 kubelet[3138]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 00:49:07.138340 kubelet[3138]: I0310 00:49:07.138275 3138 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 10 00:49:07.138497 kubelet[3138]: I0310 00:49:07.138487 3138 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 00:49:07.139349 kubelet[3138]: I0310 00:49:07.138560 3138 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 00:49:07.139349 kubelet[3138]: I0310 00:49:07.138570 3138 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 00:49:07.139349 kubelet[3138]: I0310 00:49:07.139058 3138 server.go:951] "Client rotation is on, will bootstrap in background" Mar 10 00:49:07.140902 kubelet[3138]: I0310 00:49:07.140882 3138 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 00:49:07.143754 kubelet[3138]: I0310 00:49:07.143587 3138 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 00:49:07.145443 kubelet[3138]: E0310 00:49:07.145411 3138 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 00:49:07.145910 kubelet[3138]: I0310 00:49:07.145691 3138 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Mar 10 00:49:07.148908 kubelet[3138]: I0310 00:49:07.148892 3138 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 10 00:49:07.149586 kubelet[3138]: I0310 00:49:07.149193 3138 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 00:49:07.149586 kubelet[3138]: I0310 00:49:07.149216 3138 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-9b959526b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 00:49:07.149586 kubelet[3138]: I0310 00:49:07.149359 3138 topology_manager.go:143] "Creating topology manager with none policy" Mar 10 00:49:07.149586 kubelet[3138]: I0310 00:49:07.149366 3138 container_manager_linux.go:308] "Creating device plugin manager" Mar 10 00:49:07.149781 kubelet[3138]: I0310 00:49:07.149388 3138 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 00:49:07.149781 kubelet[3138]: I0310 00:49:07.149554 3138 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 10 00:49:07.149952 kubelet[3138]: I0310 00:49:07.149941 3138 kubelet.go:482] "Attempting to sync node with API server" Mar 10 00:49:07.150030 kubelet[3138]: I0310 00:49:07.150021 3138 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 00:49:07.150108 kubelet[3138]: I0310 00:49:07.150100 3138 kubelet.go:394] "Adding apiserver pod source" Mar 10 00:49:07.150155 kubelet[3138]: I0310 00:49:07.150148 3138 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 00:49:07.153962 kubelet[3138]: I0310 00:49:07.153929 3138 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 00:49:07.154838 kubelet[3138]: I0310 00:49:07.154814 3138 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 00:49:07.155012 kubelet[3138]: I0310 00:49:07.154854 3138 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 00:49:07.160742 kubelet[3138]: I0310 00:49:07.160725 3138 server.go:1257] "Started kubelet" Mar 10 00:49:07.163697 kubelet[3138]: I0310 00:49:07.163278 3138 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 00:49:07.164685 
kubelet[3138]: I0310 00:49:07.164653 3138 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 10 00:49:07.168383 kubelet[3138]: I0310 00:49:07.168346 3138 server.go:317] "Adding debug handlers to kubelet server" Mar 10 00:49:07.173607 kubelet[3138]: I0310 00:49:07.173552 3138 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 00:49:07.174718 kubelet[3138]: I0310 00:49:07.173622 3138 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 00:49:07.174718 kubelet[3138]: I0310 00:49:07.173792 3138 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 00:49:07.184710 kubelet[3138]: I0310 00:49:07.184669 3138 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 00:49:07.186912 kubelet[3138]: I0310 00:49:07.186428 3138 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 10 00:49:07.187696 kubelet[3138]: E0310 00:49:07.187257 3138 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-9b959526b1\" not found" Mar 10 00:49:07.188652 kubelet[3138]: I0310 00:49:07.188372 3138 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 00:49:07.188879 kubelet[3138]: I0310 00:49:07.188863 3138 reconciler.go:29] "Reconciler: start to sync state" Mar 10 00:49:07.195687 kubelet[3138]: I0310 00:49:07.195460 3138 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 00:49:07.197452 kubelet[3138]: I0310 00:49:07.197412 3138 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 10 00:49:07.197452 kubelet[3138]: I0310 00:49:07.197440 3138 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 10 00:49:07.197557 kubelet[3138]: I0310 00:49:07.197464 3138 kubelet.go:2501] "Starting kubelet main sync loop" Mar 10 00:49:07.197557 kubelet[3138]: E0310 00:49:07.197507 3138 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 00:49:07.201716 kubelet[3138]: I0310 00:49:07.201415 3138 factory.go:223] Registration of the systemd container factory successfully Mar 10 00:49:07.201716 kubelet[3138]: I0310 00:49:07.201534 3138 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 00:49:07.212503 kubelet[3138]: I0310 00:49:07.209603 3138 factory.go:223] Registration of the containerd container factory successfully Mar 10 00:49:07.268115 kubelet[3138]: I0310 00:49:07.268090 3138 cpu_manager.go:225] "Starting" policy="none" Mar 10 00:49:07.268115 kubelet[3138]: I0310 00:49:07.268103 3138 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 10 00:49:07.268115 kubelet[3138]: I0310 00:49:07.268124 3138 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 10 00:49:07.268281 kubelet[3138]: I0310 00:49:07.268271 3138 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 10 00:49:07.268367 kubelet[3138]: I0310 00:49:07.268339 3138 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 10 00:49:07.268367 kubelet[3138]: I0310 00:49:07.268365 3138 policy_none.go:50] "Start" Mar 10 00:49:07.268425 kubelet[3138]: I0310 00:49:07.268373 3138 memory_manager.go:187] "Starting memorymanager" 
policy="None" Mar 10 00:49:07.268425 kubelet[3138]: I0310 00:49:07.268383 3138 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 00:49:07.268497 kubelet[3138]: I0310 00:49:07.268484 3138 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 00:49:07.268528 kubelet[3138]: I0310 00:49:07.268501 3138 policy_none.go:44] "Start" Mar 10 00:49:07.273404 kubelet[3138]: E0310 00:49:07.273375 3138 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 00:49:07.274599 kubelet[3138]: I0310 00:49:07.273533 3138 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 10 00:49:07.274599 kubelet[3138]: I0310 00:49:07.273547 3138 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 00:49:07.275021 kubelet[3138]: I0310 00:49:07.274645 3138 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 10 00:49:07.275994 kubelet[3138]: E0310 00:49:07.275977 3138 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 10 00:49:07.298942 kubelet[3138]: I0310 00:49:07.298910 3138 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.299362 kubelet[3138]: I0310 00:49:07.299083 3138 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.299362 kubelet[3138]: I0310 00:49:07.298988 3138 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.312226 kubelet[3138]: I0310 00:49:07.312187 3138 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:07.312685 kubelet[3138]: I0310 00:49:07.312607 3138 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:07.313108 kubelet[3138]: E0310 00:49:07.312668 3138 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-9b959526b1\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.313811 kubelet[3138]: I0310 00:49:07.313727 3138 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:07.313811 kubelet[3138]: E0310 00:49:07.313764 3138 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.376591 kubelet[3138]: I0310 00:49:07.376342 3138 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.388690 kubelet[3138]: I0310 00:49:07.388663 
3138 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.389049 kubelet[3138]: I0310 00:49:07.388874 3138 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390041 kubelet[3138]: I0310 00:49:07.389662 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390041 kubelet[3138]: I0310 00:49:07.389695 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390041 kubelet[3138]: I0310 00:49:07.389714 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390041 kubelet[3138]: I0310 00:49:07.389732 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f188aeb08d1f466ff016197c82f79e0a-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-9b959526b1\" (UID: \"f188aeb08d1f466ff016197c82f79e0a\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390041 kubelet[3138]: 
I0310 00:49:07.389747 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec01d02f8c0450723c7e9f2677f5927c-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" (UID: \"ec01d02f8c0450723c7e9f2677f5927c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390257 kubelet[3138]: I0310 00:49:07.389780 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec01d02f8c0450723c7e9f2677f5927c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" (UID: \"ec01d02f8c0450723c7e9f2677f5927c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390822 kubelet[3138]: I0310 00:49:07.390440 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390822 kubelet[3138]: I0310 00:49:07.390474 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41f40b08e857406b971be852d482e8c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" (UID: \"41f40b08e857406b971be852d482e8c4\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:07.390822 kubelet[3138]: I0310 00:49:07.390511 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec01d02f8c0450723c7e9f2677f5927c-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-9b959526b1\" (UID: \"ec01d02f8c0450723c7e9f2677f5927c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:08.031682 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 10 00:49:08.151933 kubelet[3138]: I0310 00:49:08.151705 3138 apiserver.go:52] "Watching apiserver" Mar 10 00:49:08.188930 kubelet[3138]: I0310 00:49:08.188878 3138 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 00:49:08.247080 kubelet[3138]: I0310 00:49:08.247041 3138 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:08.248750 kubelet[3138]: I0310 00:49:08.248362 3138 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:08.248750 kubelet[3138]: I0310 00:49:08.248375 3138 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:08.269755 kubelet[3138]: I0310 00:49:08.269714 3138 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:08.270045 kubelet[3138]: E0310 00:49:08.269771 3138 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-9b959526b1\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:08.270045 kubelet[3138]: I0310 00:49:08.269714 3138 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:08.270045 kubelet[3138]: E0310 00:49:08.269831 3138 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-9b959526b1\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:08.270406 kubelet[3138]: I0310 00:49:08.270201 3138 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 10 00:49:08.270406 kubelet[3138]: E0310 00:49:08.270251 3138 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-9b959526b1\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" Mar 10 00:49:09.297709 kubelet[3138]: I0310 00:49:09.297596 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-9b959526b1" podStartSLOduration=4.297570739 podStartE2EDuration="4.297570739s" podCreationTimestamp="2026-03-10 00:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 00:49:09.286341601 +0000 UTC m=+2.190116401" watchObservedRunningTime="2026-03-10 00:49:09.297570739 +0000 UTC m=+2.201345539" Mar 10 00:49:09.299068 kubelet[3138]: I0310 00:49:09.298908 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-9b959526b1" podStartSLOduration=4.298899462 podStartE2EDuration="4.298899462s" podCreationTimestamp="2026-03-10 00:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 00:49:09.29784874 +0000 UTC m=+2.201623540" watchObservedRunningTime="2026-03-10 00:49:09.298899462 +0000 UTC m=+2.202674262" Mar 10 00:49:09.311644 kubelet[3138]: I0310 00:49:09.311397 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-9b959526b1" podStartSLOduration=2.311383722 podStartE2EDuration="2.311383722s" podCreationTimestamp="2026-03-10 00:49:07 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 00:49:09.311349682 +0000 UTC m=+2.215124482" watchObservedRunningTime="2026-03-10 00:49:09.311383722 +0000 UTC m=+2.215158522" Mar 10 00:49:12.545037 kubelet[3138]: I0310 00:49:12.544739 3138 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 00:49:12.545759 kubelet[3138]: I0310 00:49:12.545490 3138 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 00:49:12.545809 containerd[1725]: time="2026-03-10T00:49:12.545035265Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 10 00:49:13.266982 systemd[1]: Created slice kubepods-besteffort-pod3d937d2d_51e3_4ca7_912d_abdc5c5f1dc8.slice - libcontainer container kubepods-besteffort-pod3d937d2d_51e3_4ca7_912d_abdc5c5f1dc8.slice. Mar 10 00:49:13.327638 kubelet[3138]: I0310 00:49:13.327584 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8-kube-proxy\") pod \"kube-proxy-g79nc\" (UID: \"3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8\") " pod="kube-system/kube-proxy-g79nc" Mar 10 00:49:13.327638 kubelet[3138]: I0310 00:49:13.327628 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8-lib-modules\") pod \"kube-proxy-g79nc\" (UID: \"3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8\") " pod="kube-system/kube-proxy-g79nc" Mar 10 00:49:13.327796 kubelet[3138]: I0310 00:49:13.327651 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8-xtables-lock\") pod \"kube-proxy-g79nc\" (UID: \"3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8\") " pod="kube-system/kube-proxy-g79nc" Mar 10 00:49:13.327796 kubelet[3138]: I0310 00:49:13.327667 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlgr\" (UniqueName: \"kubernetes.io/projected/3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8-kube-api-access-fxlgr\") pod \"kube-proxy-g79nc\" (UID: \"3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8\") " pod="kube-system/kube-proxy-g79nc" Mar 10 00:49:13.589127 containerd[1725]: time="2026-03-10T00:49:13.588627483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g79nc,Uid:3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8,Namespace:kube-system,Attempt:0,}" Mar 10 00:49:13.635948 containerd[1725]: time="2026-03-10T00:49:13.634875080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:13.635948 containerd[1725]: time="2026-03-10T00:49:13.634927240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:13.635948 containerd[1725]: time="2026-03-10T00:49:13.634952480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:13.635948 containerd[1725]: time="2026-03-10T00:49:13.635036760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:13.657811 systemd[1]: Started cri-containerd-9e92552e63b1a62b40b2ccef088e581263bce362678d9950ff00cdf2d5f83cad.scope - libcontainer container 9e92552e63b1a62b40b2ccef088e581263bce362678d9950ff00cdf2d5f83cad. 
Mar 10 00:49:13.680916 containerd[1725]: time="2026-03-10T00:49:13.680830156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g79nc,Uid:3d937d2d-51e3-4ca7-912d-abdc5c5f1dc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e92552e63b1a62b40b2ccef088e581263bce362678d9950ff00cdf2d5f83cad\"" Mar 10 00:49:13.690491 containerd[1725]: time="2026-03-10T00:49:13.690446652Z" level=info msg="CreateContainer within sandbox \"9e92552e63b1a62b40b2ccef088e581263bce362678d9950ff00cdf2d5f83cad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 10 00:49:13.731323 containerd[1725]: time="2026-03-10T00:49:13.731276880Z" level=info msg="CreateContainer within sandbox \"9e92552e63b1a62b40b2ccef088e581263bce362678d9950ff00cdf2d5f83cad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6eae9b6fb54316536d12a3d4ee7639a70a9c2874a3803d7a4d28b253de2ace40\"" Mar 10 00:49:13.731966 containerd[1725]: time="2026-03-10T00:49:13.731880801Z" level=info msg="StartContainer for \"6eae9b6fb54316536d12a3d4ee7639a70a9c2874a3803d7a4d28b253de2ace40\"" Mar 10 00:49:13.766823 systemd[1]: Started cri-containerd-6eae9b6fb54316536d12a3d4ee7639a70a9c2874a3803d7a4d28b253de2ace40.scope - libcontainer container 6eae9b6fb54316536d12a3d4ee7639a70a9c2874a3803d7a4d28b253de2ace40. Mar 10 00:49:13.819698 systemd[1]: Created slice kubepods-besteffort-podb038ac62_cde3_4b76_b029_ea68e44cbc36.slice - libcontainer container kubepods-besteffort-podb038ac62_cde3_4b76_b029_ea68e44cbc36.slice. 
Mar 10 00:49:13.832745 kubelet[3138]: I0310 00:49:13.832649 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b038ac62-cde3-4b76-b029-ea68e44cbc36-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-4jq7k\" (UID: \"b038ac62-cde3-4b76-b029-ea68e44cbc36\") " pod="tigera-operator/tigera-operator-6cf4cccc57-4jq7k" Mar 10 00:49:13.832745 kubelet[3138]: I0310 00:49:13.832696 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grpx6\" (UniqueName: \"kubernetes.io/projected/b038ac62-cde3-4b76-b029-ea68e44cbc36-kube-api-access-grpx6\") pod \"tigera-operator-6cf4cccc57-4jq7k\" (UID: \"b038ac62-cde3-4b76-b029-ea68e44cbc36\") " pod="tigera-operator/tigera-operator-6cf4cccc57-4jq7k" Mar 10 00:49:13.833445 containerd[1725]: time="2026-03-10T00:49:13.833331250Z" level=info msg="StartContainer for \"6eae9b6fb54316536d12a3d4ee7639a70a9c2874a3803d7a4d28b253de2ace40\" returns successfully" Mar 10 00:49:14.130261 containerd[1725]: time="2026-03-10T00:49:14.129894464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-4jq7k,Uid:b038ac62-cde3-4b76-b029-ea68e44cbc36,Namespace:tigera-operator,Attempt:0,}" Mar 10 00:49:14.181610 containerd[1725]: time="2026-03-10T00:49:14.181422989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:14.181610 containerd[1725]: time="2026-03-10T00:49:14.181477750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:14.181610 containerd[1725]: time="2026-03-10T00:49:14.181488430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:14.181610 containerd[1725]: time="2026-03-10T00:49:14.181567350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:14.202068 systemd[1]: Started cri-containerd-03097130a6e8389b7cea1b0394768bc46a6b2a1ff15c0f424d82fd8e646e0707.scope - libcontainer container 03097130a6e8389b7cea1b0394768bc46a6b2a1ff15c0f424d82fd8e646e0707. Mar 10 00:49:14.241163 containerd[1725]: time="2026-03-10T00:49:14.241119889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-4jq7k,Uid:b038ac62-cde3-4b76-b029-ea68e44cbc36,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"03097130a6e8389b7cea1b0394768bc46a6b2a1ff15c0f424d82fd8e646e0707\"" Mar 10 00:49:14.244675 containerd[1725]: time="2026-03-10T00:49:14.244434374Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 10 00:49:14.443030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679220224.mount: Deactivated successfully. Mar 10 00:49:16.360531 kubelet[3138]: I0310 00:49:16.360362 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g79nc" podStartSLOduration=3.360351057 podStartE2EDuration="3.360351057s" podCreationTimestamp="2026-03-10 00:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 00:49:14.280030314 +0000 UTC m=+7.183805114" watchObservedRunningTime="2026-03-10 00:49:16.360351057 +0000 UTC m=+9.264125857" Mar 10 00:49:16.688831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75805072.mount: Deactivated successfully. 
Mar 10 00:49:17.607459 containerd[1725]: time="2026-03-10T00:49:17.606721298Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:17.610107 containerd[1725]: time="2026-03-10T00:49:17.610078944Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 10 00:49:17.614754 containerd[1725]: time="2026-03-10T00:49:17.614714871Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:17.618132 containerd[1725]: time="2026-03-10T00:49:17.618034756Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:17.619003 containerd[1725]: time="2026-03-10T00:49:17.618777637Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 3.374290983s" Mar 10 00:49:17.619003 containerd[1725]: time="2026-03-10T00:49:17.618811517Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 10 00:49:17.627938 containerd[1725]: time="2026-03-10T00:49:17.627903811Z" level=info msg="CreateContainer within sandbox \"03097130a6e8389b7cea1b0394768bc46a6b2a1ff15c0f424d82fd8e646e0707\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 10 00:49:17.661596 containerd[1725]: time="2026-03-10T00:49:17.661534664Z" level=info msg="CreateContainer within sandbox 
\"03097130a6e8389b7cea1b0394768bc46a6b2a1ff15c0f424d82fd8e646e0707\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ee4b1413a6eefd799a04780b0ed3550462fc9564f371c61a2552b3cd0eb4f778\"" Mar 10 00:49:17.663100 containerd[1725]: time="2026-03-10T00:49:17.662968586Z" level=info msg="StartContainer for \"ee4b1413a6eefd799a04780b0ed3550462fc9564f371c61a2552b3cd0eb4f778\"" Mar 10 00:49:17.696783 systemd[1]: Started cri-containerd-ee4b1413a6eefd799a04780b0ed3550462fc9564f371c61a2552b3cd0eb4f778.scope - libcontainer container ee4b1413a6eefd799a04780b0ed3550462fc9564f371c61a2552b3cd0eb4f778. Mar 10 00:49:17.724129 containerd[1725]: time="2026-03-10T00:49:17.724078321Z" level=info msg="StartContainer for \"ee4b1413a6eefd799a04780b0ed3550462fc9564f371c61a2552b3cd0eb4f778\" returns successfully" Mar 10 00:49:18.288270 kubelet[3138]: I0310 00:49:18.288205 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-4jq7k" podStartSLOduration=1.910855091 podStartE2EDuration="5.288190959s" podCreationTimestamp="2026-03-10 00:49:13 +0000 UTC" firstStartedPulling="2026-03-10 00:49:14.242989812 +0000 UTC m=+7.146764612" lastFinishedPulling="2026-03-10 00:49:17.62032568 +0000 UTC m=+10.524100480" observedRunningTime="2026-03-10 00:49:18.287788998 +0000 UTC m=+11.191563798" watchObservedRunningTime="2026-03-10 00:49:18.288190959 +0000 UTC m=+11.191965799" Mar 10 00:49:23.564488 sudo[2198]: pam_unix(sudo:session): session closed for user root Mar 10 00:49:23.641597 sshd[2195]: pam_unix(sshd:session): session closed for user core Mar 10 00:49:23.647923 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Mar 10 00:49:23.648125 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:47822.service: Deactivated successfully. Mar 10 00:49:23.651047 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 10 00:49:23.651480 systemd[1]: session-9.scope: Consumed 4.309s CPU time, 151.3M memory peak, 0B memory swap peak. Mar 10 00:49:23.652660 systemd-logind[1694]: Removed session 9. Mar 10 00:49:29.541983 systemd[1]: Created slice kubepods-besteffort-pod447af4e6_9d2c_4186_a0d4_8608cf942516.slice - libcontainer container kubepods-besteffort-pod447af4e6_9d2c_4186_a0d4_8608cf942516.slice. Mar 10 00:49:29.627927 kubelet[3138]: I0310 00:49:29.627866 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/447af4e6-9d2c-4186-a0d4-8608cf942516-tigera-ca-bundle\") pod \"calico-typha-7fd8b8b776-xhlmh\" (UID: \"447af4e6-9d2c-4186-a0d4-8608cf942516\") " pod="calico-system/calico-typha-7fd8b8b776-xhlmh" Mar 10 00:49:29.627927 kubelet[3138]: I0310 00:49:29.627911 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/447af4e6-9d2c-4186-a0d4-8608cf942516-typha-certs\") pod \"calico-typha-7fd8b8b776-xhlmh\" (UID: \"447af4e6-9d2c-4186-a0d4-8608cf942516\") " pod="calico-system/calico-typha-7fd8b8b776-xhlmh" Mar 10 00:49:29.627927 kubelet[3138]: I0310 00:49:29.627931 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkr8\" (UniqueName: \"kubernetes.io/projected/447af4e6-9d2c-4186-a0d4-8608cf942516-kube-api-access-tzkr8\") pod \"calico-typha-7fd8b8b776-xhlmh\" (UID: \"447af4e6-9d2c-4186-a0d4-8608cf942516\") " pod="calico-system/calico-typha-7fd8b8b776-xhlmh" Mar 10 00:49:29.646112 systemd[1]: Created slice kubepods-besteffort-podf3b21f82_6a2b_44eb_8808_ae38ed7d6a37.slice - libcontainer container kubepods-besteffort-podf3b21f82_6a2b_44eb_8808_ae38ed7d6a37.slice. 
Mar 10 00:49:29.729081 kubelet[3138]: I0310 00:49:29.729042 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-flexvol-driver-host\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729081 kubelet[3138]: I0310 00:49:29.729082 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-nodeproc\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729255 kubelet[3138]: I0310 00:49:29.729099 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghz7m\" (UniqueName: \"kubernetes.io/projected/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-kube-api-access-ghz7m\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729255 kubelet[3138]: I0310 00:49:29.729131 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-bpffs\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729255 kubelet[3138]: I0310 00:49:29.729149 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-cni-bin-dir\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729255 kubelet[3138]: I0310 00:49:29.729163 3138 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-cni-log-dir\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729255 kubelet[3138]: I0310 00:49:29.729177 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-cni-net-dir\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729359 kubelet[3138]: I0310 00:49:29.729192 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-tigera-ca-bundle\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729359 kubelet[3138]: I0310 00:49:29.729207 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-xtables-lock\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729359 kubelet[3138]: I0310 00:49:29.729242 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-sys-fs\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729359 kubelet[3138]: I0310 00:49:29.729257 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-var-run-calico\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729359 kubelet[3138]: I0310 00:49:29.729274 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-policysync\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729465 kubelet[3138]: I0310 00:49:29.729288 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-var-lib-calico\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729465 kubelet[3138]: I0310 00:49:29.729306 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-lib-modules\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.729465 kubelet[3138]: I0310 00:49:29.729320 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3b21f82-6a2b-44eb-8808-ae38ed7d6a37-node-certs\") pod \"calico-node-bq9wj\" (UID: \"f3b21f82-6a2b-44eb-8808-ae38ed7d6a37\") " pod="calico-system/calico-node-bq9wj" Mar 10 00:49:29.758649 kubelet[3138]: E0310 00:49:29.757570 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:29.831690 kubelet[3138]: I0310 00:49:29.830440 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgzt\" (UniqueName: \"kubernetes.io/projected/e643e7c4-0931-432c-9761-f364e4ac4030-kube-api-access-llgzt\") pod \"csi-node-driver-zhks4\" (UID: \"e643e7c4-0931-432c-9761-f364e4ac4030\") " pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:29.833924 kubelet[3138]: I0310 00:49:29.832009 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e643e7c4-0931-432c-9761-f364e4ac4030-kubelet-dir\") pod \"csi-node-driver-zhks4\" (UID: \"e643e7c4-0931-432c-9761-f364e4ac4030\") " pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:29.833924 kubelet[3138]: I0310 00:49:29.832393 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e643e7c4-0931-432c-9761-f364e4ac4030-varrun\") pod \"csi-node-driver-zhks4\" (UID: \"e643e7c4-0931-432c-9761-f364e4ac4030\") " pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:29.833924 kubelet[3138]: I0310 00:49:29.833343 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e643e7c4-0931-432c-9761-f364e4ac4030-registration-dir\") pod \"csi-node-driver-zhks4\" (UID: \"e643e7c4-0931-432c-9761-f364e4ac4030\") " pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:29.833924 kubelet[3138]: I0310 00:49:29.833434 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e643e7c4-0931-432c-9761-f364e4ac4030-socket-dir\") pod 
\"csi-node-driver-zhks4\" (UID: \"e643e7c4-0931-432c-9761-f364e4ac4030\") " pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:29.836964 kubelet[3138]: E0310 00:49:29.836942 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.837127 kubelet[3138]: W0310 00:49:29.837084 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.837202 kubelet[3138]: E0310 00:49:29.837190 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.839435 kubelet[3138]: E0310 00:49:29.839407 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.839435 kubelet[3138]: W0310 00:49:29.839425 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.839435 kubelet[3138]: E0310 00:49:29.839440 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.846827 kubelet[3138]: E0310 00:49:29.846800 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.846827 kubelet[3138]: W0310 00:49:29.846821 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.846948 kubelet[3138]: E0310 00:49:29.846840 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.852726 containerd[1725]: time="2026-03-10T00:49:29.852688990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fd8b8b776-xhlmh,Uid:447af4e6-9d2c-4186-a0d4-8608cf942516,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:29.893316 containerd[1725]: time="2026-03-10T00:49:29.893129630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:29.893316 containerd[1725]: time="2026-03-10T00:49:29.893198270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:29.893316 containerd[1725]: time="2026-03-10T00:49:29.893219670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:29.893545 containerd[1725]: time="2026-03-10T00:49:29.893317190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:29.912842 systemd[1]: Started cri-containerd-00ff005b2b4cb8ecd4a715c3d4a4b20ae88d31f6444cc4e31b6169b44c288e7e.scope - libcontainer container 00ff005b2b4cb8ecd4a715c3d4a4b20ae88d31f6444cc4e31b6169b44c288e7e. Mar 10 00:49:29.935207 kubelet[3138]: E0310 00:49:29.935097 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.935207 kubelet[3138]: W0310 00:49:29.935121 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.935207 kubelet[3138]: E0310 00:49:29.935139 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.936004 kubelet[3138]: E0310 00:49:29.935329 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.936004 kubelet[3138]: W0310 00:49:29.935337 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.936004 kubelet[3138]: E0310 00:49:29.935346 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.936004 kubelet[3138]: E0310 00:49:29.935555 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.936004 kubelet[3138]: W0310 00:49:29.935567 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.936004 kubelet[3138]: E0310 00:49:29.935580 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.936004 kubelet[3138]: E0310 00:49:29.935821 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.936004 kubelet[3138]: W0310 00:49:29.935832 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.936004 kubelet[3138]: E0310 00:49:29.935843 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.936802 kubelet[3138]: E0310 00:49:29.936716 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.936802 kubelet[3138]: W0310 00:49:29.936735 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.936802 kubelet[3138]: E0310 00:49:29.936753 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.937043 kubelet[3138]: E0310 00:49:29.936979 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.937043 kubelet[3138]: W0310 00:49:29.936991 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.937043 kubelet[3138]: E0310 00:49:29.937003 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.937394 kubelet[3138]: E0310 00:49:29.937373 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.937394 kubelet[3138]: W0310 00:49:29.937391 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.937474 kubelet[3138]: E0310 00:49:29.937407 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.937709 kubelet[3138]: E0310 00:49:29.937693 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.937709 kubelet[3138]: W0310 00:49:29.937707 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.937826 kubelet[3138]: E0310 00:49:29.937733 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.938109 kubelet[3138]: E0310 00:49:29.938026 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.938109 kubelet[3138]: W0310 00:49:29.938038 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.938109 kubelet[3138]: E0310 00:49:29.938048 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.938323 kubelet[3138]: E0310 00:49:29.938254 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.938323 kubelet[3138]: W0310 00:49:29.938263 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.938323 kubelet[3138]: E0310 00:49:29.938276 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.938819 kubelet[3138]: E0310 00:49:29.938491 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.938819 kubelet[3138]: W0310 00:49:29.938500 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.938819 kubelet[3138]: E0310 00:49:29.938509 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.938882 kubelet[3138]: E0310 00:49:29.938822 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.938882 kubelet[3138]: W0310 00:49:29.938832 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.938882 kubelet[3138]: E0310 00:49:29.938853 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.939107 kubelet[3138]: E0310 00:49:29.939095 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.939107 kubelet[3138]: W0310 00:49:29.939105 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.939188 kubelet[3138]: E0310 00:49:29.939115 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.939454 kubelet[3138]: E0310 00:49:29.939381 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.939454 kubelet[3138]: W0310 00:49:29.939396 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.939454 kubelet[3138]: E0310 00:49:29.939405 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.939597 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.940714 kubelet[3138]: W0310 00:49:29.939606 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.939614 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.939892 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.940714 kubelet[3138]: W0310 00:49:29.939901 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.939922 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.940113 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.940714 kubelet[3138]: W0310 00:49:29.940121 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.940129 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.940714 kubelet[3138]: E0310 00:49:29.940296 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.940931 kubelet[3138]: W0310 00:49:29.940304 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.940931 kubelet[3138]: E0310 00:49:29.940314 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.940931 kubelet[3138]: E0310 00:49:29.940460 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.940931 kubelet[3138]: W0310 00:49:29.940467 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.940931 kubelet[3138]: E0310 00:49:29.940475 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.940931 kubelet[3138]: E0310 00:49:29.940658 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.940931 kubelet[3138]: W0310 00:49:29.940666 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.940931 kubelet[3138]: E0310 00:49:29.940674 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.941328 kubelet[3138]: E0310 00:49:29.941312 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.941500 kubelet[3138]: W0310 00:49:29.941389 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.941500 kubelet[3138]: E0310 00:49:29.941407 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.942061 kubelet[3138]: E0310 00:49:29.941768 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.942061 kubelet[3138]: W0310 00:49:29.941798 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.942061 kubelet[3138]: E0310 00:49:29.941814 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.942061 kubelet[3138]: E0310 00:49:29.941990 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.942061 kubelet[3138]: W0310 00:49:29.941998 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.942061 kubelet[3138]: E0310 00:49:29.942007 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.942386 kubelet[3138]: E0310 00:49:29.942274 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.942386 kubelet[3138]: W0310 00:49:29.942286 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.942386 kubelet[3138]: E0310 00:49:29.942298 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:29.942762 kubelet[3138]: E0310 00:49:29.942748 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.942898 kubelet[3138]: W0310 00:49:29.942884 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.942957 kubelet[3138]: E0310 00:49:29.942947 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:29.947147 containerd[1725]: time="2026-03-10T00:49:29.947108897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fd8b8b776-xhlmh,Uid:447af4e6-9d2c-4186-a0d4-8608cf942516,Namespace:calico-system,Attempt:0,} returns sandbox id \"00ff005b2b4cb8ecd4a715c3d4a4b20ae88d31f6444cc4e31b6169b44c288e7e\"" Mar 10 00:49:29.949674 containerd[1725]: time="2026-03-10T00:49:29.949625022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 10 00:49:29.961941 containerd[1725]: time="2026-03-10T00:49:29.961089564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bq9wj,Uid:f3b21f82-6a2b-44eb-8808-ae38ed7d6a37,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:29.962377 kubelet[3138]: E0310 00:49:29.962352 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:29.962377 kubelet[3138]: W0310 00:49:29.962375 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:29.962467 kubelet[3138]: E0310 00:49:29.962414 3138 plugins.go:697] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:30.001992 containerd[1725]: time="2026-03-10T00:49:30.001721445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:30.001992 containerd[1725]: time="2026-03-10T00:49:30.001836045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:30.001992 containerd[1725]: time="2026-03-10T00:49:30.001848485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:30.002336 containerd[1725]: time="2026-03-10T00:49:30.002077766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:30.017852 systemd[1]: Started cri-containerd-ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94.scope - libcontainer container ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94. Mar 10 00:49:30.039029 containerd[1725]: time="2026-03-10T00:49:30.038993559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bq9wj,Uid:f3b21f82-6a2b-44eb-8808-ae38ed7d6a37,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\"" Mar 10 00:49:31.198834 kubelet[3138]: E0310 00:49:31.198784 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:31.689554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147921900.mount: Deactivated successfully. 
Mar 10 00:49:32.101943 containerd[1725]: time="2026-03-10T00:49:32.101829883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:32.105255 containerd[1725]: time="2026-03-10T00:49:32.105096170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 10 00:49:32.108791 containerd[1725]: time="2026-03-10T00:49:32.108764694Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:32.114035 containerd[1725]: time="2026-03-10T00:49:32.113705820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:32.114831 containerd[1725]: time="2026-03-10T00:49:32.114799222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.16512108s" Mar 10 00:49:32.114896 containerd[1725]: time="2026-03-10T00:49:32.114833222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 10 00:49:32.116201 containerd[1725]: time="2026-03-10T00:49:32.116028343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 10 00:49:32.135879 containerd[1725]: time="2026-03-10T00:49:32.135699208Z" level=info msg="CreateContainer within sandbox \"00ff005b2b4cb8ecd4a715c3d4a4b20ae88d31f6444cc4e31b6169b44c288e7e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 10 00:49:32.176017 containerd[1725]: time="2026-03-10T00:49:32.175973619Z" level=info msg="CreateContainer within sandbox \"00ff005b2b4cb8ecd4a715c3d4a4b20ae88d31f6444cc4e31b6169b44c288e7e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"79fd330159a08a4889229e232fa3e5a64684770803c16de4025b7c9a4c236879\"" Mar 10 00:49:32.176985 containerd[1725]: time="2026-03-10T00:49:32.176946021Z" level=info msg="StartContainer for \"79fd330159a08a4889229e232fa3e5a64684770803c16de4025b7c9a4c236879\"" Mar 10 00:49:32.205814 systemd[1]: Started cri-containerd-79fd330159a08a4889229e232fa3e5a64684770803c16de4025b7c9a4c236879.scope - libcontainer container 79fd330159a08a4889229e232fa3e5a64684770803c16de4025b7c9a4c236879. Mar 10 00:49:32.243478 containerd[1725]: time="2026-03-10T00:49:32.243429145Z" level=info msg="StartContainer for \"79fd330159a08a4889229e232fa3e5a64684770803c16de4025b7c9a4c236879\" returns successfully" Mar 10 00:49:32.337895 kubelet[3138]: E0310 00:49:32.337824 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.338687 kubelet[3138]: W0310 00:49:32.337851 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.338687 kubelet[3138]: E0310 00:49:32.338306 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.338687 kubelet[3138]: E0310 00:49:32.338537 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.338687 kubelet[3138]: W0310 00:49:32.338547 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.338687 kubelet[3138]: E0310 00:49:32.338557 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.339718 kubelet[3138]: E0310 00:49:32.339551 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.339718 kubelet[3138]: W0310 00:49:32.339574 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.339718 kubelet[3138]: E0310 00:49:32.339587 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.340061 kubelet[3138]: E0310 00:49:32.339899 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.340061 kubelet[3138]: W0310 00:49:32.339918 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.340061 kubelet[3138]: E0310 00:49:32.339930 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.340359 kubelet[3138]: E0310 00:49:32.340265 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.340359 kubelet[3138]: W0310 00:49:32.340278 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.340359 kubelet[3138]: E0310 00:49:32.340290 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.340699 kubelet[3138]: E0310 00:49:32.340589 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.340699 kubelet[3138]: W0310 00:49:32.340601 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.340699 kubelet[3138]: E0310 00:49:32.340612 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.341034 kubelet[3138]: E0310 00:49:32.340931 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.341034 kubelet[3138]: W0310 00:49:32.340943 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.341034 kubelet[3138]: E0310 00:49:32.340961 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.341371 kubelet[3138]: E0310 00:49:32.341233 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.341371 kubelet[3138]: W0310 00:49:32.341244 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.341371 kubelet[3138]: E0310 00:49:32.341254 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.341640 kubelet[3138]: E0310 00:49:32.341516 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.341640 kubelet[3138]: W0310 00:49:32.341526 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.341640 kubelet[3138]: E0310 00:49:32.341538 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.342280 kubelet[3138]: E0310 00:49:32.341948 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.342280 kubelet[3138]: W0310 00:49:32.341961 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.342280 kubelet[3138]: E0310 00:49:32.341972 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.342774 kubelet[3138]: E0310 00:49:32.342556 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.342774 kubelet[3138]: W0310 00:49:32.342570 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.342774 kubelet[3138]: E0310 00:49:32.342692 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.343141 kubelet[3138]: E0310 00:49:32.343005 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.343141 kubelet[3138]: W0310 00:49:32.343017 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.343141 kubelet[3138]: E0310 00:49:32.343028 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.343494 kubelet[3138]: E0310 00:49:32.343387 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.343494 kubelet[3138]: W0310 00:49:32.343397 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.343494 kubelet[3138]: E0310 00:49:32.343408 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.343832 kubelet[3138]: E0310 00:49:32.343745 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.343832 kubelet[3138]: W0310 00:49:32.343756 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.343832 kubelet[3138]: E0310 00:49:32.343766 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.344179 kubelet[3138]: E0310 00:49:32.344071 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.344179 kubelet[3138]: W0310 00:49:32.344081 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.344179 kubelet[3138]: E0310 00:49:32.344101 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.352525 kubelet[3138]: E0310 00:49:32.352422 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.352525 kubelet[3138]: W0310 00:49:32.352443 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.352525 kubelet[3138]: E0310 00:49:32.352463 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.354978 kubelet[3138]: E0310 00:49:32.353799 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.354978 kubelet[3138]: W0310 00:49:32.353819 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.354978 kubelet[3138]: E0310 00:49:32.353845 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.355398 kubelet[3138]: E0310 00:49:32.355377 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.355398 kubelet[3138]: W0310 00:49:32.355394 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.355554 kubelet[3138]: E0310 00:49:32.355412 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.355650 kubelet[3138]: E0310 00:49:32.355628 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.355699 kubelet[3138]: W0310 00:49:32.355650 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.355699 kubelet[3138]: E0310 00:49:32.355661 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.355843 kubelet[3138]: E0310 00:49:32.355830 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.355843 kubelet[3138]: W0310 00:49:32.355840 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.355916 kubelet[3138]: E0310 00:49:32.355849 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.356187 kubelet[3138]: E0310 00:49:32.356167 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.356187 kubelet[3138]: W0310 00:49:32.356182 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.356281 kubelet[3138]: E0310 00:49:32.356195 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.356817 kubelet[3138]: E0310 00:49:32.356797 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.356817 kubelet[3138]: W0310 00:49:32.356812 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.357020 kubelet[3138]: E0310 00:49:32.356825 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.357710 kubelet[3138]: E0310 00:49:32.357225 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.357710 kubelet[3138]: W0310 00:49:32.357709 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.358203 kubelet[3138]: E0310 00:49:32.357724 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.358416 kubelet[3138]: E0310 00:49:32.358395 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.358416 kubelet[3138]: W0310 00:49:32.358413 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.358524 kubelet[3138]: E0310 00:49:32.358426 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.359848 kubelet[3138]: E0310 00:49:32.359823 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.359924 kubelet[3138]: W0310 00:49:32.359883 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.359924 kubelet[3138]: E0310 00:49:32.359901 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.360596 kubelet[3138]: E0310 00:49:32.360150 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.360596 kubelet[3138]: W0310 00:49:32.360164 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.360596 kubelet[3138]: E0310 00:49:32.360204 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.360596 kubelet[3138]: E0310 00:49:32.360387 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.360596 kubelet[3138]: W0310 00:49:32.360396 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.360596 kubelet[3138]: E0310 00:49:32.360407 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.363279 kubelet[3138]: E0310 00:49:32.363244 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.363279 kubelet[3138]: W0310 00:49:32.363265 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.363279 kubelet[3138]: E0310 00:49:32.363282 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.364869 kubelet[3138]: E0310 00:49:32.364847 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.364869 kubelet[3138]: W0310 00:49:32.364864 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.364967 kubelet[3138]: E0310 00:49:32.364878 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.365916 kubelet[3138]: E0310 00:49:32.365892 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.365916 kubelet[3138]: W0310 00:49:32.365910 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.366061 kubelet[3138]: E0310 00:49:32.365924 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.366241 kubelet[3138]: E0310 00:49:32.366200 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.366241 kubelet[3138]: W0310 00:49:32.366239 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.366357 kubelet[3138]: E0310 00:49:32.366252 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:32.366707 kubelet[3138]: E0310 00:49:32.366688 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.366774 kubelet[3138]: W0310 00:49:32.366703 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.366774 kubelet[3138]: E0310 00:49:32.366750 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:32.367559 kubelet[3138]: E0310 00:49:32.367538 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:32.367559 kubelet[3138]: W0310 00:49:32.367554 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:32.367693 kubelet[3138]: E0310 00:49:32.367567 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.199374 kubelet[3138]: E0310 00:49:33.198365 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:33.306994 kubelet[3138]: I0310 00:49:33.306962 3138 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 00:49:33.352309 kubelet[3138]: E0310 00:49:33.352273 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.352309 kubelet[3138]: W0310 00:49:33.352299 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.352701 kubelet[3138]: E0310 00:49:33.352322 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.352701 kubelet[3138]: E0310 00:49:33.352520 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.352701 kubelet[3138]: W0310 00:49:33.352529 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.352701 kubelet[3138]: E0310 00:49:33.352542 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.352805 kubelet[3138]: E0310 00:49:33.352713 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.352805 kubelet[3138]: W0310 00:49:33.352722 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.352805 kubelet[3138]: E0310 00:49:33.352731 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.352904 kubelet[3138]: E0310 00:49:33.352885 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.352904 kubelet[3138]: W0310 00:49:33.352899 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.352956 kubelet[3138]: E0310 00:49:33.352908 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.353073 kubelet[3138]: E0310 00:49:33.353060 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.353099 kubelet[3138]: W0310 00:49:33.353072 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.353099 kubelet[3138]: E0310 00:49:33.353081 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.353223 kubelet[3138]: E0310 00:49:33.353211 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.353290 kubelet[3138]: W0310 00:49:33.353227 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.353290 kubelet[3138]: E0310 00:49:33.353238 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.353385 kubelet[3138]: E0310 00:49:33.353370 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.353385 kubelet[3138]: W0310 00:49:33.353383 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.353430 kubelet[3138]: E0310 00:49:33.353392 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.353536 kubelet[3138]: E0310 00:49:33.353524 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.353564 kubelet[3138]: W0310 00:49:33.353535 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.353564 kubelet[3138]: E0310 00:49:33.353544 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.353720 kubelet[3138]: E0310 00:49:33.353707 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.353750 kubelet[3138]: W0310 00:49:33.353719 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.353750 kubelet[3138]: E0310 00:49:33.353728 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.353871 kubelet[3138]: E0310 00:49:33.353859 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.353871 kubelet[3138]: W0310 00:49:33.353869 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.353918 kubelet[3138]: E0310 00:49:33.353877 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.354008 kubelet[3138]: E0310 00:49:33.353996 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.354032 kubelet[3138]: W0310 00:49:33.354007 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.354032 kubelet[3138]: E0310 00:49:33.354014 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.354148 kubelet[3138]: E0310 00:49:33.354136 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.354175 kubelet[3138]: W0310 00:49:33.354148 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.354175 kubelet[3138]: E0310 00:49:33.354155 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.354306 kubelet[3138]: E0310 00:49:33.354294 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.354333 kubelet[3138]: W0310 00:49:33.354305 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.354333 kubelet[3138]: E0310 00:49:33.354312 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.354452 kubelet[3138]: E0310 00:49:33.354440 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.354528 kubelet[3138]: W0310 00:49:33.354451 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.354528 kubelet[3138]: E0310 00:49:33.354459 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.354591 kubelet[3138]: E0310 00:49:33.354578 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.354591 kubelet[3138]: W0310 00:49:33.354588 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.354651 kubelet[3138]: E0310 00:49:33.354595 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.361910 kubelet[3138]: E0310 00:49:33.361893 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.361910 kubelet[3138]: W0310 00:49:33.361906 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.361983 kubelet[3138]: E0310 00:49:33.361917 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.362133 kubelet[3138]: E0310 00:49:33.362117 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.362133 kubelet[3138]: W0310 00:49:33.362130 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.362189 kubelet[3138]: E0310 00:49:33.362140 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.362316 kubelet[3138]: E0310 00:49:33.362301 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.362316 kubelet[3138]: W0310 00:49:33.362312 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.362369 kubelet[3138]: E0310 00:49:33.362324 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.362536 kubelet[3138]: E0310 00:49:33.362522 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.362536 kubelet[3138]: W0310 00:49:33.362533 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.362610 kubelet[3138]: E0310 00:49:33.362542 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.362739 kubelet[3138]: E0310 00:49:33.362703 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.362739 kubelet[3138]: W0310 00:49:33.362714 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.362739 kubelet[3138]: E0310 00:49:33.362724 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.362870 kubelet[3138]: E0310 00:49:33.362858 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.362870 kubelet[3138]: W0310 00:49:33.362867 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.362938 kubelet[3138]: E0310 00:49:33.362875 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.363075 kubelet[3138]: E0310 00:49:33.363063 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.363075 kubelet[3138]: W0310 00:49:33.363073 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.363136 kubelet[3138]: E0310 00:49:33.363082 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.363688 kubelet[3138]: E0310 00:49:33.363670 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.363688 kubelet[3138]: W0310 00:49:33.363686 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.363794 kubelet[3138]: E0310 00:49:33.363700 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.363882 kubelet[3138]: E0310 00:49:33.363867 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.363882 kubelet[3138]: W0310 00:49:33.363878 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.363958 kubelet[3138]: E0310 00:49:33.363887 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.364031 kubelet[3138]: E0310 00:49:33.364019 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.364031 kubelet[3138]: W0310 00:49:33.364028 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.364094 kubelet[3138]: E0310 00:49:33.364036 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.364215 kubelet[3138]: E0310 00:49:33.364203 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.364215 kubelet[3138]: W0310 00:49:33.364213 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.364271 kubelet[3138]: E0310 00:49:33.364222 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.364385 kubelet[3138]: E0310 00:49:33.364371 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.364385 kubelet[3138]: W0310 00:49:33.364382 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.364441 kubelet[3138]: E0310 00:49:33.364391 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.364596 kubelet[3138]: E0310 00:49:33.364582 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.364596 kubelet[3138]: W0310 00:49:33.364593 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.364682 kubelet[3138]: E0310 00:49:33.364601 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.365057 kubelet[3138]: E0310 00:49:33.364958 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.365057 kubelet[3138]: W0310 00:49:33.364973 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.365057 kubelet[3138]: E0310 00:49:33.364984 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.365349 kubelet[3138]: E0310 00:49:33.365215 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.365349 kubelet[3138]: W0310 00:49:33.365228 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.365349 kubelet[3138]: E0310 00:49:33.365239 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.365521 kubelet[3138]: E0310 00:49:33.365509 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.365740 kubelet[3138]: W0310 00:49:33.365565 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.365740 kubelet[3138]: E0310 00:49:33.365581 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.365837 kubelet[3138]: E0310 00:49:33.365821 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.365837 kubelet[3138]: W0310 00:49:33.365834 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.365891 kubelet[3138]: E0310 00:49:33.365846 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 00:49:33.366155 kubelet[3138]: E0310 00:49:33.366134 3138 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 00:49:33.366155 kubelet[3138]: W0310 00:49:33.366151 3138 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 00:49:33.366222 kubelet[3138]: E0310 00:49:33.366165 3138 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 00:49:33.687579 containerd[1725]: time="2026-03-10T00:49:33.686743460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:33.692878 containerd[1725]: time="2026-03-10T00:49:33.692842138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 10 00:49:33.696451 containerd[1725]: time="2026-03-10T00:49:33.696420560Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:33.701882 containerd[1725]: time="2026-03-10T00:49:33.701829633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:33.702959 containerd[1725]: time="2026-03-10T00:49:33.702358117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.586292374s" Mar 10 00:49:33.702959 containerd[1725]: time="2026-03-10T00:49:33.702391397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 10 00:49:33.712429 containerd[1725]: time="2026-03-10T00:49:33.712391539Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 00:49:33.752077 containerd[1725]: time="2026-03-10T00:49:33.752034384Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c\"" Mar 10 00:49:33.752893 containerd[1725]: time="2026-03-10T00:49:33.752778109Z" level=info msg="StartContainer for \"0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c\"" Mar 10 00:49:33.782797 systemd[1]: Started cri-containerd-0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c.scope - libcontainer container 0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c. Mar 10 00:49:33.814256 containerd[1725]: time="2026-03-10T00:49:33.813874248Z" level=info msg="StartContainer for \"0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c\" returns successfully" Mar 10 00:49:33.821869 systemd[1]: cri-containerd-0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c.scope: Deactivated successfully. 
Mar 10 00:49:33.843585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c-rootfs.mount: Deactivated successfully. Mar 10 00:49:34.357640 kubelet[3138]: I0310 00:49:34.339423 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7fd8b8b776-xhlmh" podStartSLOduration=3.172714942 podStartE2EDuration="5.339410624s" podCreationTimestamp="2026-03-10 00:49:29 +0000 UTC" firstStartedPulling="2026-03-10 00:49:29.949187421 +0000 UTC m=+22.852962221" lastFinishedPulling="2026-03-10 00:49:32.115883103 +0000 UTC m=+25.019657903" observedRunningTime="2026-03-10 00:49:32.337817664 +0000 UTC m=+25.241592464" watchObservedRunningTime="2026-03-10 00:49:34.339410624 +0000 UTC m=+27.243185424" Mar 10 00:49:34.979015 containerd[1725]: time="2026-03-10T00:49:34.978820446Z" level=info msg="shim disconnected" id=0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c namespace=k8s.io Mar 10 00:49:34.979015 containerd[1725]: time="2026-03-10T00:49:34.978990647Z" level=warning msg="cleaning up after shim disconnected" id=0c3ea2fbf0cd1b335a974b580cd023b422f3a877d02470cb7e66dec9c7f90a4c namespace=k8s.io Mar 10 00:49:34.979015 containerd[1725]: time="2026-03-10T00:49:34.979000647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 00:49:35.199572 kubelet[3138]: E0310 00:49:35.199525 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:35.321052 containerd[1725]: time="2026-03-10T00:49:35.320265008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 00:49:37.199454 kubelet[3138]: E0310 00:49:37.198654 3138 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:39.199908 kubelet[3138]: E0310 00:49:39.198985 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:40.009936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254054185.mount: Deactivated successfully. Mar 10 00:49:40.120415 containerd[1725]: time="2026-03-10T00:49:40.120366382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:40.126531 containerd[1725]: time="2026-03-10T00:49:40.126402952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 10 00:49:40.128343 containerd[1725]: time="2026-03-10T00:49:40.128284515Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:40.133671 containerd[1725]: time="2026-03-10T00:49:40.133450122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:40.134779 containerd[1725]: time="2026-03-10T00:49:40.134127844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.813822115s" Mar 10 00:49:40.134779 containerd[1725]: time="2026-03-10T00:49:40.134167164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 10 00:49:40.145055 containerd[1725]: time="2026-03-10T00:49:40.145020260Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 00:49:40.188611 containerd[1725]: time="2026-03-10T00:49:40.188491607Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709\"" Mar 10 00:49:40.189274 containerd[1725]: time="2026-03-10T00:49:40.189155608Z" level=info msg="StartContainer for \"1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709\"" Mar 10 00:49:40.224813 systemd[1]: Started cri-containerd-1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709.scope - libcontainer container 1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709. Mar 10 00:49:40.255078 containerd[1725]: time="2026-03-10T00:49:40.255030308Z" level=info msg="StartContainer for \"1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709\" returns successfully" Mar 10 00:49:40.288850 systemd[1]: cri-containerd-1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709.scope: Deactivated successfully. Mar 10 00:49:41.010227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709-rootfs.mount: Deactivated successfully. 
Mar 10 00:49:41.199921 kubelet[3138]: E0310 00:49:41.198685 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:42.016546 containerd[1725]: time="2026-03-10T00:49:42.016477883Z" level=info msg="shim disconnected" id=1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709 namespace=k8s.io Mar 10 00:49:42.016546 containerd[1725]: time="2026-03-10T00:49:42.016539843Z" level=warning msg="cleaning up after shim disconnected" id=1879a40ca636404d2e8214f862c8a084ca023ab0048848f4c48c32d294e5c709 namespace=k8s.io Mar 10 00:49:42.016546 containerd[1725]: time="2026-03-10T00:49:42.016550563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 00:49:42.339437 containerd[1725]: time="2026-03-10T00:49:42.339332155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 00:49:43.199299 kubelet[3138]: E0310 00:49:43.199258 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:45.063575 containerd[1725]: time="2026-03-10T00:49:45.063523001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:45.067885 containerd[1725]: time="2026-03-10T00:49:45.067840087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 10 00:49:45.071479 containerd[1725]: time="2026-03-10T00:49:45.071416891Z" level=info msg="ImageCreate event 
name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:45.076338 containerd[1725]: time="2026-03-10T00:49:45.076281458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:45.077279 containerd[1725]: time="2026-03-10T00:49:45.077159339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.737785384s" Mar 10 00:49:45.077279 containerd[1725]: time="2026-03-10T00:49:45.077191859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 10 00:49:45.085800 containerd[1725]: time="2026-03-10T00:49:45.085752391Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 00:49:45.124800 containerd[1725]: time="2026-03-10T00:49:45.122930240Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312\"" Mar 10 00:49:45.124800 containerd[1725]: time="2026-03-10T00:49:45.123897122Z" level=info msg="StartContainer for \"eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312\"" Mar 10 00:49:45.149664 systemd[1]: 
run-containerd-runc-k8s.io-eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312-runc.GVqgfk.mount: Deactivated successfully. Mar 10 00:49:45.160846 systemd[1]: Started cri-containerd-eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312.scope - libcontainer container eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312. Mar 10 00:49:45.191617 containerd[1725]: time="2026-03-10T00:49:45.191478852Z" level=info msg="StartContainer for \"eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312\" returns successfully" Mar 10 00:49:45.198039 kubelet[3138]: E0310 00:49:45.197914 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:46.459261 containerd[1725]: time="2026-03-10T00:49:46.459168508Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 00:49:46.462404 systemd[1]: cri-containerd-eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312.scope: Deactivated successfully. Mar 10 00:49:46.482617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312-rootfs.mount: Deactivated successfully. 
Mar 10 00:49:46.507063 kubelet[3138]: I0310 00:49:46.507032 3138 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 10 00:49:47.369123 containerd[1725]: time="2026-03-10T00:49:47.368544525Z" level=info msg="shim disconnected" id=eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312 namespace=k8s.io Mar 10 00:49:47.369123 containerd[1725]: time="2026-03-10T00:49:47.368933686Z" level=warning msg="cleaning up after shim disconnected" id=eaaa2a77fff426c96bcd832bba146a1f9b9ee36a149d74597e8f9ef2bc601312 namespace=k8s.io Mar 10 00:49:47.369123 containerd[1725]: time="2026-03-10T00:49:47.368944646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 00:49:47.376558 systemd[1]: Created slice kubepods-burstable-podfe1fca07_2fb3_438a_bce9_8b3a59b0fe53.slice - libcontainer container kubepods-burstable-podfe1fca07_2fb3_438a_bce9_8b3a59b0fe53.slice. Mar 10 00:49:47.399274 systemd[1]: Created slice kubepods-besteffort-pode643e7c4_0931_432c_9761_f364e4ac4030.slice - libcontainer container kubepods-besteffort-pode643e7c4_0931_432c_9761_f364e4ac4030.slice. Mar 10 00:49:47.415993 systemd[1]: Created slice kubepods-burstable-pod8e1f6144_8d73_40ae_a431_d82abb11d87e.slice - libcontainer container kubepods-burstable-pod8e1f6144_8d73_40ae_a431_d82abb11d87e.slice. Mar 10 00:49:47.420549 containerd[1725]: time="2026-03-10T00:49:47.420507755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhks4,Uid:e643e7c4-0931-432c-9761-f364e4ac4030,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:47.427527 systemd[1]: Created slice kubepods-besteffort-pod5cf6393d_0fb3_436b_8c41_2d69d998dc28.slice - libcontainer container kubepods-besteffort-pod5cf6393d_0fb3_436b_8c41_2d69d998dc28.slice. Mar 10 00:49:47.436722 systemd[1]: Created slice kubepods-besteffort-pod587df943_1233_4d7e_919c_b8c045ed5b09.slice - libcontainer container kubepods-besteffort-pod587df943_1233_4d7e_919c_b8c045ed5b09.slice. 
Mar 10 00:49:47.445068 systemd[1]: Created slice kubepods-besteffort-pod53655394_cd20_4d9b_8838_dda6114377a5.slice - libcontainer container kubepods-besteffort-pod53655394_cd20_4d9b_8838_dda6114377a5.slice. Mar 10 00:49:47.453287 systemd[1]: Created slice kubepods-besteffort-pod7e5ad127_4df9_4c78_b9f6_97b3790dc3ca.slice - libcontainer container kubepods-besteffort-pod7e5ad127_4df9_4c78_b9f6_97b3790dc3ca.slice. Mar 10 00:49:47.459609 kubelet[3138]: I0310 00:49:47.458708 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e5ad127-4df9-4c78-b9f6-97b3790dc3ca-calico-apiserver-certs\") pod \"calico-apiserver-646d9b8f6d-tqwb4\" (UID: \"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca\") " pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" Mar 10 00:49:47.461230 kubelet[3138]: I0310 00:49:47.461198 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-whisker-ca-bundle\") pod \"whisker-767bcc99db-5sbjp\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " pod="calico-system/whisker-767bcc99db-5sbjp" Mar 10 00:49:47.461402 kubelet[3138]: I0310 00:49:47.461389 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1484e1bd-a157-4269-85f5-50c715d3704d-goldmane-key-pair\") pod \"goldmane-9f7667bb8-c5qrt\" (UID: \"1484e1bd-a157-4269-85f5-50c715d3704d\") " pod="calico-system/goldmane-9f7667bb8-c5qrt" Mar 10 00:49:47.461695 kubelet[3138]: I0310 00:49:47.461582 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5cf6393d-0fb3-436b-8c41-2d69d998dc28-calico-apiserver-certs\") pod \"calico-apiserver-646d9b8f6d-rsxcf\" (UID: 
\"5cf6393d-0fb3-436b-8c41-2d69d998dc28\") " pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" Mar 10 00:49:47.462524 kubelet[3138]: I0310 00:49:47.462113 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65bpf\" (UniqueName: \"kubernetes.io/projected/fe1fca07-2fb3-438a-bce9-8b3a59b0fe53-kube-api-access-65bpf\") pod \"coredns-7d764666f9-djf48\" (UID: \"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53\") " pod="kube-system/coredns-7d764666f9-djf48" Mar 10 00:49:47.465732 kubelet[3138]: I0310 00:49:47.464698 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qgg2\" (UniqueName: \"kubernetes.io/projected/7e5ad127-4df9-4c78-b9f6-97b3790dc3ca-kube-api-access-5qgg2\") pod \"calico-apiserver-646d9b8f6d-tqwb4\" (UID: \"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca\") " pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" Mar 10 00:49:47.465732 kubelet[3138]: I0310 00:49:47.464797 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587df943-1233-4d7e-919c-b8c045ed5b09-tigera-ca-bundle\") pod \"calico-kube-controllers-79fb6d9494-k5jwc\" (UID: \"587df943-1233-4d7e-919c-b8c045ed5b09\") " pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" Mar 10 00:49:47.465951 kubelet[3138]: I0310 00:49:47.465931 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv57v\" (UniqueName: \"kubernetes.io/projected/587df943-1233-4d7e-919c-b8c045ed5b09-kube-api-access-gv57v\") pod \"calico-kube-controllers-79fb6d9494-k5jwc\" (UID: \"587df943-1233-4d7e-919c-b8c045ed5b09\") " pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" Mar 10 00:49:47.466573 kubelet[3138]: I0310 00:49:47.466554 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nginx-config\" (UniqueName: \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-nginx-config\") pod \"whisker-767bcc99db-5sbjp\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " pod="calico-system/whisker-767bcc99db-5sbjp" Mar 10 00:49:47.467432 kubelet[3138]: I0310 00:49:47.467120 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2c9\" (UniqueName: \"kubernetes.io/projected/53655394-cd20-4d9b-8838-dda6114377a5-kube-api-access-dx2c9\") pod \"whisker-767bcc99db-5sbjp\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " pod="calico-system/whisker-767bcc99db-5sbjp" Mar 10 00:49:47.467432 kubelet[3138]: I0310 00:49:47.467384 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1484e1bd-a157-4269-85f5-50c715d3704d-config\") pod \"goldmane-9f7667bb8-c5qrt\" (UID: \"1484e1bd-a157-4269-85f5-50c715d3704d\") " pod="calico-system/goldmane-9f7667bb8-c5qrt" Mar 10 00:49:47.468092 kubelet[3138]: I0310 00:49:47.467413 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2289\" (UniqueName: \"kubernetes.io/projected/8e1f6144-8d73-40ae-a431-d82abb11d87e-kube-api-access-z2289\") pod \"coredns-7d764666f9-2rhxk\" (UID: \"8e1f6144-8d73-40ae-a431-d82abb11d87e\") " pod="kube-system/coredns-7d764666f9-2rhxk" Mar 10 00:49:47.468092 kubelet[3138]: I0310 00:49:47.468073 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbrm\" (UniqueName: \"kubernetes.io/projected/5cf6393d-0fb3-436b-8c41-2d69d998dc28-kube-api-access-5kbrm\") pod \"calico-apiserver-646d9b8f6d-rsxcf\" (UID: \"5cf6393d-0fb3-436b-8c41-2d69d998dc28\") " pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" Mar 10 00:49:47.468365 kubelet[3138]: I0310 00:49:47.468217 3138 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53655394-cd20-4d9b-8838-dda6114377a5-whisker-backend-key-pair\") pod \"whisker-767bcc99db-5sbjp\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " pod="calico-system/whisker-767bcc99db-5sbjp" Mar 10 00:49:47.468365 kubelet[3138]: I0310 00:49:47.468242 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1484e1bd-a157-4269-85f5-50c715d3704d-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-c5qrt\" (UID: \"1484e1bd-a157-4269-85f5-50c715d3704d\") " pod="calico-system/goldmane-9f7667bb8-c5qrt" Mar 10 00:49:47.468630 kubelet[3138]: I0310 00:49:47.468263 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkmf\" (UniqueName: \"kubernetes.io/projected/1484e1bd-a157-4269-85f5-50c715d3704d-kube-api-access-zvkmf\") pod \"goldmane-9f7667bb8-c5qrt\" (UID: \"1484e1bd-a157-4269-85f5-50c715d3704d\") " pod="calico-system/goldmane-9f7667bb8-c5qrt" Mar 10 00:49:47.468630 kubelet[3138]: I0310 00:49:47.468586 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe1fca07-2fb3-438a-bce9-8b3a59b0fe53-config-volume\") pod \"coredns-7d764666f9-djf48\" (UID: \"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53\") " pod="kube-system/coredns-7d764666f9-djf48" Mar 10 00:49:47.468630 kubelet[3138]: I0310 00:49:47.468603 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e1f6144-8d73-40ae-a431-d82abb11d87e-config-volume\") pod \"coredns-7d764666f9-2rhxk\" (UID: \"8e1f6144-8d73-40ae-a431-d82abb11d87e\") " pod="kube-system/coredns-7d764666f9-2rhxk" Mar 10 00:49:47.475299 systemd[1]: Created slice 
kubepods-besteffort-pod1484e1bd_a157_4269_85f5_50c715d3704d.slice - libcontainer container kubepods-besteffort-pod1484e1bd_a157_4269_85f5_50c715d3704d.slice. Mar 10 00:49:47.528424 containerd[1725]: time="2026-03-10T00:49:47.528374659Z" level=error msg="Failed to destroy network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.530266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4-shm.mount: Deactivated successfully. Mar 10 00:49:47.531011 containerd[1725]: time="2026-03-10T00:49:47.530970023Z" level=error msg="encountered an error cleaning up failed sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.531087 containerd[1725]: time="2026-03-10T00:49:47.531040983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhks4,Uid:e643e7c4-0931-432c-9761-f364e4ac4030,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.531569 kubelet[3138]: E0310 00:49:47.531527 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.532279 kubelet[3138]: E0310 00:49:47.531934 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:47.532279 kubelet[3138]: E0310 00:49:47.531974 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zhks4" Mar 10 00:49:47.532279 kubelet[3138]: E0310 00:49:47.532048 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zhks4_calico-system(e643e7c4-0931-432c-9761-f364e4ac4030)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zhks4_calico-system(e643e7c4-0931-432c-9761-f364e4ac4030)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:47.702067 containerd[1725]: 
time="2026-03-10T00:49:47.701590291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-djf48,Uid:fe1fca07-2fb3-438a-bce9-8b3a59b0fe53,Namespace:kube-system,Attempt:0,}" Mar 10 00:49:47.729740 containerd[1725]: time="2026-03-10T00:49:47.729286008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2rhxk,Uid:8e1f6144-8d73-40ae-a431-d82abb11d87e,Namespace:kube-system,Attempt:0,}" Mar 10 00:49:47.743265 containerd[1725]: time="2026-03-10T00:49:47.743126947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-rsxcf,Uid:5cf6393d-0fb3-436b-8c41-2d69d998dc28,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:47.749470 containerd[1725]: time="2026-03-10T00:49:47.749394955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fb6d9494-k5jwc,Uid:587df943-1233-4d7e-919c-b8c045ed5b09,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:47.756455 containerd[1725]: time="2026-03-10T00:49:47.756415844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767bcc99db-5sbjp,Uid:53655394-cd20-4d9b-8838-dda6114377a5,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:47.767410 containerd[1725]: time="2026-03-10T00:49:47.767174459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-tqwb4,Uid:7e5ad127-4df9-4c78-b9f6-97b3790dc3ca,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:47.791617 containerd[1725]: time="2026-03-10T00:49:47.791577172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-c5qrt,Uid:1484e1bd-a157-4269-85f5-50c715d3704d,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:47.796062 containerd[1725]: time="2026-03-10T00:49:47.796013217Z" level=error msg="Failed to destroy network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.796622 containerd[1725]: time="2026-03-10T00:49:47.796460738Z" level=error msg="encountered an error cleaning up failed sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.796622 containerd[1725]: time="2026-03-10T00:49:47.796525618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-djf48,Uid:fe1fca07-2fb3-438a-bce9-8b3a59b0fe53,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.796857 kubelet[3138]: E0310 00:49:47.796765 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.796857 kubelet[3138]: E0310 00:49:47.796833 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-djf48" Mar 10 
00:49:47.796857 kubelet[3138]: E0310 00:49:47.796850 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-djf48" Mar 10 00:49:47.796953 kubelet[3138]: E0310 00:49:47.796906 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-djf48_kube-system(fe1fca07-2fb3-438a-bce9-8b3a59b0fe53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-djf48_kube-system(fe1fca07-2fb3-438a-bce9-8b3a59b0fe53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-djf48" podUID="fe1fca07-2fb3-438a-bce9-8b3a59b0fe53" Mar 10 00:49:47.903488 containerd[1725]: time="2026-03-10T00:49:47.903224001Z" level=error msg="Failed to destroy network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.903751 containerd[1725]: time="2026-03-10T00:49:47.903525121Z" level=error msg="encountered an error cleaning up failed sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.903812 containerd[1725]: time="2026-03-10T00:49:47.903774122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2rhxk,Uid:8e1f6144-8d73-40ae-a431-d82abb11d87e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.904024 kubelet[3138]: E0310 00:49:47.903990 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.904093 kubelet[3138]: E0310 00:49:47.904045 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-2rhxk" Mar 10 00:49:47.904093 kubelet[3138]: E0310 00:49:47.904063 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-2rhxk" Mar 10 00:49:47.904146 kubelet[3138]: E0310 00:49:47.904124 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-2rhxk_kube-system(8e1f6144-8d73-40ae-a431-d82abb11d87e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-2rhxk_kube-system(8e1f6144-8d73-40ae-a431-d82abb11d87e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-2rhxk" podUID="8e1f6144-8d73-40ae-a431-d82abb11d87e" Mar 10 00:49:47.982661 containerd[1725]: time="2026-03-10T00:49:47.982229267Z" level=error msg="Failed to destroy network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.983372 containerd[1725]: time="2026-03-10T00:49:47.983337588Z" level=error msg="encountered an error cleaning up failed sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.983452 containerd[1725]: time="2026-03-10T00:49:47.983395308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-rsxcf,Uid:5cf6393d-0fb3-436b-8c41-2d69d998dc28,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.988872 kubelet[3138]: E0310 00:49:47.988782 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:47.988872 kubelet[3138]: E0310 00:49:47.988851 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" Mar 10 00:49:47.988872 kubelet[3138]: E0310 00:49:47.988870 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" Mar 10 00:49:47.989650 kubelet[3138]: E0310 00:49:47.989370 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646d9b8f6d-rsxcf_calico-system(5cf6393d-0fb3-436b-8c41-2d69d998dc28)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646d9b8f6d-rsxcf_calico-system(5cf6393d-0fb3-436b-8c41-2d69d998dc28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" podUID="5cf6393d-0fb3-436b-8c41-2d69d998dc28" Mar 10 00:49:48.036521 containerd[1725]: time="2026-03-10T00:49:48.036469259Z" level=error msg="Failed to destroy network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.037018 containerd[1725]: time="2026-03-10T00:49:48.036987660Z" level=error msg="encountered an error cleaning up failed sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.038868 containerd[1725]: time="2026-03-10T00:49:48.038745302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-c5qrt,Uid:1484e1bd-a157-4269-85f5-50c715d3704d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.039021 kubelet[3138]: 
E0310 00:49:48.038979 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.039148 kubelet[3138]: E0310 00:49:48.039043 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-c5qrt" Mar 10 00:49:48.039148 kubelet[3138]: E0310 00:49:48.039063 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-c5qrt" Mar 10 00:49:48.039148 kubelet[3138]: E0310 00:49:48.039107 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-c5qrt_calico-system(1484e1bd-a157-4269-85f5-50c715d3704d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-c5qrt_calico-system(1484e1bd-a157-4269-85f5-50c715d3704d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-c5qrt" podUID="1484e1bd-a157-4269-85f5-50c715d3704d" Mar 10 00:49:48.041748 containerd[1725]: time="2026-03-10T00:49:48.041712426Z" level=error msg="Failed to destroy network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.042155 containerd[1725]: time="2026-03-10T00:49:48.042125467Z" level=error msg="encountered an error cleaning up failed sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.042314 containerd[1725]: time="2026-03-10T00:49:48.042185347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-tqwb4,Uid:7e5ad127-4df9-4c78-b9f6-97b3790dc3ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.042482 kubelet[3138]: E0310 00:49:48.042375 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 10 00:49:48.042482 kubelet[3138]: E0310 00:49:48.042436 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" Mar 10 00:49:48.042482 kubelet[3138]: E0310 00:49:48.042453 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" Mar 10 00:49:48.043141 kubelet[3138]: E0310 00:49:48.042501 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-646d9b8f6d-tqwb4_calico-system(7e5ad127-4df9-4c78-b9f6-97b3790dc3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-646d9b8f6d-tqwb4_calico-system(7e5ad127-4df9-4c78-b9f6-97b3790dc3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" podUID="7e5ad127-4df9-4c78-b9f6-97b3790dc3ca" Mar 10 00:49:48.043352 containerd[1725]: time="2026-03-10T00:49:48.043326308Z" level=error msg="Failed to destroy network for sandbox 
\"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.043767 containerd[1725]: time="2026-03-10T00:49:48.043738469Z" level=error msg="encountered an error cleaning up failed sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.043879 containerd[1725]: time="2026-03-10T00:49:48.043858069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767bcc99db-5sbjp,Uid:53655394-cd20-4d9b-8838-dda6114377a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.044308 kubelet[3138]: E0310 00:49:48.044122 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.044371 kubelet[3138]: E0310 00:49:48.044323 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-767bcc99db-5sbjp" Mar 10 00:49:48.044371 kubelet[3138]: E0310 00:49:48.044339 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-767bcc99db-5sbjp" Mar 10 00:49:48.044427 kubelet[3138]: E0310 00:49:48.044388 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-767bcc99db-5sbjp_calico-system(53655394-cd20-4d9b-8838-dda6114377a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-767bcc99db-5sbjp_calico-system(53655394-cd20-4d9b-8838-dda6114377a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-767bcc99db-5sbjp" podUID="53655394-cd20-4d9b-8838-dda6114377a5" Mar 10 00:49:48.052055 containerd[1725]: time="2026-03-10T00:49:48.051871640Z" level=error msg="Failed to destroy network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.052180 containerd[1725]: time="2026-03-10T00:49:48.052150040Z" level=error msg="encountered an error cleaning up 
failed sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.052216 containerd[1725]: time="2026-03-10T00:49:48.052197240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fb6d9494-k5jwc,Uid:587df943-1233-4d7e-919c-b8c045ed5b09,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.052575 kubelet[3138]: E0310 00:49:48.052400 3138 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.052575 kubelet[3138]: E0310 00:49:48.052453 3138 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" Mar 10 00:49:48.052575 kubelet[3138]: E0310 00:49:48.052469 3138 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" Mar 10 00:49:48.052733 kubelet[3138]: E0310 00:49:48.052509 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79fb6d9494-k5jwc_calico-system(587df943-1233-4d7e-919c-b8c045ed5b09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79fb6d9494-k5jwc_calico-system(587df943-1233-4d7e-919c-b8c045ed5b09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" podUID="587df943-1233-4d7e-919c-b8c045ed5b09" Mar 10 00:49:48.364748 kubelet[3138]: I0310 00:49:48.364714 3138 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:49:48.366935 containerd[1725]: time="2026-03-10T00:49:48.366889061Z" level=info msg="StopPodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\"" Mar 10 00:49:48.367345 containerd[1725]: time="2026-03-10T00:49:48.367064582Z" level=info msg="Ensure that sandbox ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f in task-service has been cleanup successfully" Mar 10 00:49:48.368341 kubelet[3138]: I0310 00:49:48.368322 3138 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:49:48.370254 kubelet[3138]: I0310 00:49:48.369979 3138 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:49:48.370501 containerd[1725]: time="2026-03-10T00:49:48.370468586Z" level=info msg="StopPodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\"" Mar 10 00:49:48.371025 containerd[1725]: time="2026-03-10T00:49:48.370959827Z" level=info msg="StopPodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\"" Mar 10 00:49:48.371625 containerd[1725]: time="2026-03-10T00:49:48.371412867Z" level=info msg="Ensure that sandbox 6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a in task-service has been cleanup successfully" Mar 10 00:49:48.372464 containerd[1725]: time="2026-03-10T00:49:48.372437309Z" level=info msg="Ensure that sandbox 828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68 in task-service has been cleanup successfully" Mar 10 00:49:48.377082 kubelet[3138]: I0310 00:49:48.376722 3138 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:49:48.377170 containerd[1725]: time="2026-03-10T00:49:48.377146915Z" level=info msg="StopPodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\"" Mar 10 00:49:48.377301 containerd[1725]: time="2026-03-10T00:49:48.377280515Z" level=info msg="Ensure that sandbox 8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974 in task-service has been cleanup successfully" Mar 10 00:49:48.382246 containerd[1725]: time="2026-03-10T00:49:48.380998520Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" 
Mar 10 00:49:48.387671 kubelet[3138]: I0310 00:49:48.387607 3138 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:49:48.389065 containerd[1725]: time="2026-03-10T00:49:48.388908571Z" level=info msg="StopPodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\"" Mar 10 00:49:48.389585 containerd[1725]: time="2026-03-10T00:49:48.389561012Z" level=info msg="Ensure that sandbox 42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb in task-service has been cleanup successfully" Mar 10 00:49:48.392478 kubelet[3138]: I0310 00:49:48.392114 3138 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:49:48.398155 containerd[1725]: time="2026-03-10T00:49:48.397536502Z" level=info msg="StopPodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\"" Mar 10 00:49:48.398866 containerd[1725]: time="2026-03-10T00:49:48.398842304Z" level=info msg="Ensure that sandbox 38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1 in task-service has been cleanup successfully" Mar 10 00:49:48.399396 kubelet[3138]: I0310 00:49:48.399376 3138 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:49:48.403666 containerd[1725]: time="2026-03-10T00:49:48.400391826Z" level=info msg="StopPodSandbox for \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\"" Mar 10 00:49:48.404811 containerd[1725]: time="2026-03-10T00:49:48.404779952Z" level=info msg="Ensure that sandbox 2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411 in task-service has been cleanup successfully" Mar 10 00:49:48.421599 kubelet[3138]: I0310 00:49:48.420933 3138 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:49:48.426417 containerd[1725]: time="2026-03-10T00:49:48.426375621Z" level=info msg="StopPodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\"" Mar 10 00:49:48.426588 containerd[1725]: time="2026-03-10T00:49:48.426568741Z" level=info msg="Ensure that sandbox 01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4 in task-service has been cleanup successfully" Mar 10 00:49:48.480927 containerd[1725]: time="2026-03-10T00:49:48.480883614Z" level=info msg="CreateContainer within sandbox \"ae5a25c89670960be3db69af1903295bf5fc95007d0041b44cc05e4abd627c94\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b\"" Mar 10 00:49:48.483795 containerd[1725]: time="2026-03-10T00:49:48.483758018Z" level=info msg="StartContainer for \"9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b\"" Mar 10 00:49:48.486159 containerd[1725]: time="2026-03-10T00:49:48.486061341Z" level=error msg="StopPodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" failed" error="failed to destroy network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.487175 kubelet[3138]: E0310 00:49:48.487146 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:49:48.487525 kubelet[3138]: E0310 00:49:48.487333 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a"} Mar 10 00:49:48.487525 kubelet[3138]: E0310 00:49:48.487440 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5cf6393d-0fb3-436b-8c41-2d69d998dc28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.487525 kubelet[3138]: E0310 00:49:48.487466 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5cf6393d-0fb3-436b-8c41-2d69d998dc28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" podUID="5cf6393d-0fb3-436b-8c41-2d69d998dc28" Mar 10 00:49:48.505684 containerd[1725]: time="2026-03-10T00:49:48.505340487Z" level=error msg="StopPodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" failed" error="failed to destroy network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 10 00:49:48.505684 containerd[1725]: time="2026-03-10T00:49:48.505463967Z" level=error msg="StopPodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" failed" error="failed to destroy network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.506050 kubelet[3138]: E0310 00:49:48.505880 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:49:48.506050 kubelet[3138]: E0310 00:49:48.505931 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f"} Mar 10 00:49:48.506050 kubelet[3138]: E0310 00:49:48.505963 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"587df943-1233-4d7e-919c-b8c045ed5b09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.506050 kubelet[3138]: E0310 00:49:48.505990 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"587df943-1233-4d7e-919c-b8c045ed5b09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" podUID="587df943-1233-4d7e-919c-b8c045ed5b09" Mar 10 00:49:48.506368 kubelet[3138]: E0310 00:49:48.505608 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:49:48.506368 kubelet[3138]: E0310 00:49:48.506280 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1"} Mar 10 00:49:48.506368 kubelet[3138]: E0310 00:49:48.506301 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53655394-cd20-4d9b-8838-dda6114377a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.506368 kubelet[3138]: E0310 00:49:48.506336 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"53655394-cd20-4d9b-8838-dda6114377a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-767bcc99db-5sbjp" podUID="53655394-cd20-4d9b-8838-dda6114377a5" Mar 10 00:49:48.508193 containerd[1725]: time="2026-03-10T00:49:48.508145610Z" level=error msg="StopPodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" failed" error="failed to destroy network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.508689 containerd[1725]: time="2026-03-10T00:49:48.508218331Z" level=error msg="StopPodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" failed" error="failed to destroy network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.508807 kubelet[3138]: E0310 00:49:48.508484 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 
00:49:48.508807 kubelet[3138]: E0310 00:49:48.508518 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68"} Mar 10 00:49:48.508807 kubelet[3138]: E0310 00:49:48.508545 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1484e1bd-a157-4269-85f5-50c715d3704d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.508807 kubelet[3138]: E0310 00:49:48.508571 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1484e1bd-a157-4269-85f5-50c715d3704d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-c5qrt" podUID="1484e1bd-a157-4269-85f5-50c715d3704d" Mar 10 00:49:48.508946 kubelet[3138]: E0310 00:49:48.508605 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:49:48.508946 kubelet[3138]: E0310 
00:49:48.508618 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974"} Mar 10 00:49:48.509375 kubelet[3138]: E0310 00:49:48.509236 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e1f6144-8d73-40ae-a431-d82abb11d87e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.509375 kubelet[3138]: E0310 00:49:48.509270 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e1f6144-8d73-40ae-a431-d82abb11d87e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-2rhxk" podUID="8e1f6144-8d73-40ae-a431-d82abb11d87e" Mar 10 00:49:48.513725 containerd[1725]: time="2026-03-10T00:49:48.513684658Z" level=error msg="StopPodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" failed" error="failed to destroy network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.514036 kubelet[3138]: E0310 00:49:48.513874 3138 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:49:48.514036 kubelet[3138]: E0310 00:49:48.513916 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb"} Mar 10 00:49:48.514036 kubelet[3138]: E0310 00:49:48.513940 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.514036 kubelet[3138]: E0310 00:49:48.513960 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" podUID="7e5ad127-4df9-4c78-b9f6-97b3790dc3ca" Mar 10 00:49:48.522094 containerd[1725]: time="2026-03-10T00:49:48.521640989Z" level=error msg="StopPodSandbox for 
\"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" failed" error="failed to destroy network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.522216 kubelet[3138]: E0310 00:49:48.522042 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:49:48.522570 kubelet[3138]: E0310 00:49:48.522286 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411"} Mar 10 00:49:48.522570 kubelet[3138]: E0310 00:49:48.522322 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.522570 kubelet[3138]: E0310 00:49:48.522361 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-djf48" podUID="fe1fca07-2fb3-438a-bce9-8b3a59b0fe53" Mar 10 00:49:48.528237 containerd[1725]: time="2026-03-10T00:49:48.528068157Z" level=error msg="StopPodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" failed" error="failed to destroy network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 00:49:48.532732 kubelet[3138]: E0310 00:49:48.528423 3138 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:49:48.532732 kubelet[3138]: E0310 00:49:48.528466 3138 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4"} Mar 10 00:49:48.532995 kubelet[3138]: E0310 00:49:48.532738 3138 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e643e7c4-0931-432c-9761-f364e4ac4030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 10 00:49:48.532995 kubelet[3138]: E0310 00:49:48.532800 3138 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e643e7c4-0931-432c-9761-f364e4ac4030\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zhks4" podUID="e643e7c4-0931-432c-9761-f364e4ac4030" Mar 10 00:49:48.559826 systemd[1]: Started cri-containerd-9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b.scope - libcontainer container 9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b. Mar 10 00:49:48.592770 containerd[1725]: time="2026-03-10T00:49:48.592728564Z" level=info msg="StartContainer for \"9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b\" returns successfully" Mar 10 00:49:49.426787 containerd[1725]: time="2026-03-10T00:49:49.426624990Z" level=info msg="StopPodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\"" Mar 10 00:49:49.461986 systemd[1]: run-containerd-runc-k8s.io-9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b-runc.ApvaCL.mount: Deactivated successfully. 
Mar 10 00:49:49.522519 kubelet[3138]: I0310 00:49:49.522447 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-bq9wj" podStartSLOduration=2.20853849 podStartE2EDuration="20.522429733s" podCreationTimestamp="2026-03-10 00:49:29 +0000 UTC" firstStartedPulling="2026-03-10 00:49:30.040564762 +0000 UTC m=+22.944339562" lastFinishedPulling="2026-03-10 00:49:48.354456045 +0000 UTC m=+41.258230805" observedRunningTime="2026-03-10 00:49:49.474591502 +0000 UTC m=+42.378366302" watchObservedRunningTime="2026-03-10 00:49:49.522429733 +0000 UTC m=+42.426204533" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.524 [INFO][4378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.525 [INFO][4378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" iface="eth0" netns="/var/run/netns/cni-a729b4ed-3577-5f99-8826-3af0f4016a9f" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.526 [INFO][4378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" iface="eth0" netns="/var/run/netns/cni-a729b4ed-3577-5f99-8826-3af0f4016a9f" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.526 [INFO][4378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" iface="eth0" netns="/var/run/netns/cni-a729b4ed-3577-5f99-8826-3af0f4016a9f" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.526 [INFO][4378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.526 [INFO][4378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.551 [INFO][4401] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.551 [INFO][4401] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.552 [INFO][4401] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.563 [WARNING][4401] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.563 [INFO][4401] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.564 [INFO][4401] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:49:49.568742 containerd[1725]: 2026-03-10 00:49:49.566 [INFO][4378] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:49:49.570488 containerd[1725]: time="2026-03-10T00:49:49.569408204Z" level=info msg="TearDown network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" successfully" Mar 10 00:49:49.570488 containerd[1725]: time="2026-03-10T00:49:49.569538204Z" level=info msg="StopPodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" returns successfully" Mar 10 00:49:49.572261 systemd[1]: run-netns-cni\x2da729b4ed\x2d3577\x2d5f99\x2d8826\x2d3af0f4016a9f.mount: Deactivated successfully. 
Mar 10 00:49:49.685974 kubelet[3138]: I0310 00:49:49.685419 3138 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/53655394-cd20-4d9b-8838-dda6114377a5-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53655394-cd20-4d9b-8838-dda6114377a5-whisker-backend-key-pair\") pod \"53655394-cd20-4d9b-8838-dda6114377a5\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " Mar 10 00:49:49.685974 kubelet[3138]: I0310 00:49:49.685482 3138 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-whisker-ca-bundle\") pod \"53655394-cd20-4d9b-8838-dda6114377a5\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " Mar 10 00:49:49.685974 kubelet[3138]: I0310 00:49:49.685501 3138 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-nginx-config\" (UniqueName: \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-nginx-config\") pod \"53655394-cd20-4d9b-8838-dda6114377a5\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " Mar 10 00:49:49.685974 kubelet[3138]: I0310 00:49:49.685519 3138 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/53655394-cd20-4d9b-8838-dda6114377a5-kube-api-access-dx2c9\" (UniqueName: \"kubernetes.io/projected/53655394-cd20-4d9b-8838-dda6114377a5-kube-api-access-dx2c9\") pod \"53655394-cd20-4d9b-8838-dda6114377a5\" (UID: \"53655394-cd20-4d9b-8838-dda6114377a5\") " Mar 10 00:49:49.685974 kubelet[3138]: I0310 00:49:49.685923 3138 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-whisker-ca-bundle" pod "53655394-cd20-4d9b-8838-dda6114377a5" (UID: "53655394-cd20-4d9b-8838-dda6114377a5"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 00:49:49.691195 kubelet[3138]: I0310 00:49:49.690169 3138 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53655394-cd20-4d9b-8838-dda6114377a5-kube-api-access-dx2c9" pod "53655394-cd20-4d9b-8838-dda6114377a5" (UID: "53655394-cd20-4d9b-8838-dda6114377a5"). InnerVolumeSpecName "kube-api-access-dx2c9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 00:49:49.690816 systemd[1]: var-lib-kubelet-pods-53655394\x2dcd20\x2d4d9b\x2d8838\x2ddda6114377a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddx2c9.mount: Deactivated successfully. Mar 10 00:49:49.693784 kubelet[3138]: I0310 00:49:49.692553 3138 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-nginx-config" pod "53655394-cd20-4d9b-8838-dda6114377a5" (UID: "53655394-cd20-4d9b-8838-dda6114377a5"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 00:49:49.694089 kubelet[3138]: I0310 00:49:49.694044 3138 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53655394-cd20-4d9b-8838-dda6114377a5-whisker-backend-key-pair" pod "53655394-cd20-4d9b-8838-dda6114377a5" (UID: "53655394-cd20-4d9b-8838-dda6114377a5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 00:49:49.694950 systemd[1]: var-lib-kubelet-pods-53655394\x2dcd20\x2d4d9b\x2d8838\x2ddda6114377a5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 10 00:49:49.786511 kubelet[3138]: I0310 00:49:49.786433 3138 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53655394-cd20-4d9b-8838-dda6114377a5-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-9b959526b1\" DevicePath \"\"" Mar 10 00:49:49.786511 kubelet[3138]: I0310 00:49:49.786469 3138 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-whisker-ca-bundle\") on node \"ci-4081.3.6-n-9b959526b1\" DevicePath \"\"" Mar 10 00:49:49.786511 kubelet[3138]: I0310 00:49:49.786481 3138 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/53655394-cd20-4d9b-8838-dda6114377a5-nginx-config\") on node \"ci-4081.3.6-n-9b959526b1\" DevicePath \"\"" Mar 10 00:49:49.786511 kubelet[3138]: I0310 00:49:49.786490 3138 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dx2c9\" (UniqueName: \"kubernetes.io/projected/53655394-cd20-4d9b-8838-dda6114377a5-kube-api-access-dx2c9\") on node \"ci-4081.3.6-n-9b959526b1\" DevicePath \"\"" Mar 10 00:49:50.437394 systemd[1]: Removed slice kubepods-besteffort-pod53655394_cd20_4d9b_8838_dda6114377a5.slice - libcontainer container kubepods-besteffort-pod53655394_cd20_4d9b_8838_dda6114377a5.slice. Mar 10 00:49:50.453834 systemd[1]: run-containerd-runc-k8s.io-9601dae5eea9bf59270605d7cd9664dc94bd61dedb681928986acd866bc5484b-runc.BYdKOQ.mount: Deactivated successfully. Mar 10 00:49:50.551135 systemd[1]: Created slice kubepods-besteffort-podf1c65a72_28c0_4838_9519_06a42c4fe04f.slice - libcontainer container kubepods-besteffort-podf1c65a72_28c0_4838_9519_06a42c4fe04f.slice. 
Mar 10 00:49:50.593005 kubelet[3138]: I0310 00:49:50.592810 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1c65a72-28c0-4838-9519-06a42c4fe04f-whisker-backend-key-pair\") pod \"whisker-5fffb4dbff-q4dr2\" (UID: \"f1c65a72-28c0-4838-9519-06a42c4fe04f\") " pod="calico-system/whisker-5fffb4dbff-q4dr2" Mar 10 00:49:50.593005 kubelet[3138]: I0310 00:49:50.592865 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1c65a72-28c0-4838-9519-06a42c4fe04f-whisker-ca-bundle\") pod \"whisker-5fffb4dbff-q4dr2\" (UID: \"f1c65a72-28c0-4838-9519-06a42c4fe04f\") " pod="calico-system/whisker-5fffb4dbff-q4dr2" Mar 10 00:49:50.593005 kubelet[3138]: I0310 00:49:50.592890 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7q6m\" (UniqueName: \"kubernetes.io/projected/f1c65a72-28c0-4838-9519-06a42c4fe04f-kube-api-access-b7q6m\") pod \"whisker-5fffb4dbff-q4dr2\" (UID: \"f1c65a72-28c0-4838-9519-06a42c4fe04f\") " pod="calico-system/whisker-5fffb4dbff-q4dr2" Mar 10 00:49:50.593005 kubelet[3138]: I0310 00:49:50.592937 3138 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f1c65a72-28c0-4838-9519-06a42c4fe04f-nginx-config\") pod \"whisker-5fffb4dbff-q4dr2\" (UID: \"f1c65a72-28c0-4838-9519-06a42c4fe04f\") " pod="calico-system/whisker-5fffb4dbff-q4dr2" Mar 10 00:49:50.839236 kubelet[3138]: I0310 00:49:50.839124 3138 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 00:49:50.862540 containerd[1725]: time="2026-03-10T00:49:50.862433382Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5fffb4dbff-q4dr2,Uid:f1c65a72-28c0-4838-9519-06a42c4fe04f,Namespace:calico-system,Attempt:0,}" Mar 10 00:49:51.044900 systemd-networkd[1510]: cali8a90605ed72: Link UP Mar 10 00:49:51.045707 systemd-networkd[1510]: cali8a90605ed72: Gained carrier Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.936 [ERROR][4529] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.950 [INFO][4529] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0 whisker-5fffb4dbff- calico-system f1c65a72-28c0-4838-9519-06a42c4fe04f 902 0 2026-03-10 00:49:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fffb4dbff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 whisker-5fffb4dbff-q4dr2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8a90605ed72 [] [] }} ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.950 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.975 [INFO][4542] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" HandleID="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.984 [INFO][4542] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" HandleID="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002731d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"whisker-5fffb4dbff-q4dr2", "timestamp":"2026-03-10 00:49:50.975381471 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e4f20)} Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.984 [INFO][4542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.984 [INFO][4542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.984 [INFO][4542] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.986 [INFO][4542] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.992 [INFO][4542] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.996 [INFO][4542] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.997 [INFO][4542] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.999 [INFO][4542] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:50.999 [INFO][4542] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:51.000 [INFO][4542] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3 Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:51.006 [INFO][4542] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:51.014 [INFO][4542] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.193/26] block=192.168.30.192/26 handle="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:51.014 [INFO][4542] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.193/26] handle="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:51.014 [INFO][4542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:49:51.064376 containerd[1725]: 2026-03-10 00:49:51.014 [INFO][4542] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.193/26] IPv6=[] ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" HandleID="k8s-pod-network.797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.065006 containerd[1725]: 2026-03-10 00:49:51.016 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0", GenerateName:"whisker-5fffb4dbff-", Namespace:"calico-system", SelfLink:"", UID:"f1c65a72-28c0-4838-9519-06a42c4fe04f", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fffb4dbff", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"whisker-5fffb4dbff-q4dr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.30.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a90605ed72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:49:51.065006 containerd[1725]: 2026-03-10 00:49:51.016 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.193/32] ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.065006 containerd[1725]: 2026-03-10 00:49:51.016 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a90605ed72 ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.065006 containerd[1725]: 2026-03-10 00:49:51.045 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.065006 containerd[1725]: 2026-03-10 00:49:51.046 [INFO][4529] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0", GenerateName:"whisker-5fffb4dbff-", Namespace:"calico-system", SelfLink:"", UID:"f1c65a72-28c0-4838-9519-06a42c4fe04f", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fffb4dbff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3", Pod:"whisker-5fffb4dbff-q4dr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.30.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8a90605ed72", MAC:"2e:31:55:ba:a9:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:49:51.065006 containerd[1725]: 2026-03-10 00:49:51.061 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3" Namespace="calico-system" Pod="whisker-5fffb4dbff-q4dr2" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--5fffb4dbff--q4dr2-eth0" Mar 10 00:49:51.084751 containerd[1725]: time="2026-03-10T00:49:51.083762834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:49:51.084751 containerd[1725]: time="2026-03-10T00:49:51.083826434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:49:51.084751 containerd[1725]: time="2026-03-10T00:49:51.083841354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:51.084751 containerd[1725]: time="2026-03-10T00:49:51.083921514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:49:51.108854 systemd[1]: Started cri-containerd-797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3.scope - libcontainer container 797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3. 
Mar 10 00:49:51.175773 containerd[1725]: time="2026-03-10T00:49:51.175614652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fffb4dbff-q4dr2,Uid:f1c65a72-28c0-4838-9519-06a42c4fe04f,Namespace:calico-system,Attempt:0,} returns sandbox id \"797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3\"" Mar 10 00:49:51.215892 containerd[1725]: time="2026-03-10T00:49:51.215617032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 00:49:51.217972 kubelet[3138]: I0310 00:49:51.217573 3138 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="53655394-cd20-4d9b-8838-dda6114377a5" path="/var/lib/kubelet/pods/53655394-cd20-4d9b-8838-dda6114377a5/volumes" Mar 10 00:49:51.385718 kernel: calico-node[4617]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 10 00:49:51.798400 systemd-networkd[1510]: vxlan.calico: Link UP Mar 10 00:49:51.798409 systemd-networkd[1510]: vxlan.calico: Gained carrier Mar 10 00:49:52.530841 systemd-networkd[1510]: cali8a90605ed72: Gained IPv6LL Mar 10 00:49:52.899939 containerd[1725]: time="2026-03-10T00:49:52.899872956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:52.903464 containerd[1725]: time="2026-03-10T00:49:52.903257841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 10 00:49:52.907527 containerd[1725]: time="2026-03-10T00:49:52.906848327Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:52.912805 containerd[1725]: time="2026-03-10T00:49:52.912724456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 10 00:49:52.913839 containerd[1725]: time="2026-03-10T00:49:52.913493417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.697734465s" Mar 10 00:49:52.913839 containerd[1725]: time="2026-03-10T00:49:52.913527017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 10 00:49:52.923461 containerd[1725]: time="2026-03-10T00:49:52.923423032Z" level=info msg="CreateContainer within sandbox \"797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 00:49:52.964010 containerd[1725]: time="2026-03-10T00:49:52.963881532Z" level=info msg="CreateContainer within sandbox \"797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"901594b2087c92d8ca5c18b221d259e51f75fe5167e116ab7cab7fad35b9bfbd\"" Mar 10 00:49:52.965602 containerd[1725]: time="2026-03-10T00:49:52.965018734Z" level=info msg="StartContainer for \"901594b2087c92d8ca5c18b221d259e51f75fe5167e116ab7cab7fad35b9bfbd\"" Mar 10 00:49:52.999824 systemd[1]: Started cri-containerd-901594b2087c92d8ca5c18b221d259e51f75fe5167e116ab7cab7fad35b9bfbd.scope - libcontainer container 901594b2087c92d8ca5c18b221d259e51f75fe5167e116ab7cab7fad35b9bfbd. 
Mar 10 00:49:53.035930 containerd[1725]: time="2026-03-10T00:49:53.035889360Z" level=info msg="StartContainer for \"901594b2087c92d8ca5c18b221d259e51f75fe5167e116ab7cab7fad35b9bfbd\" returns successfully" Mar 10 00:49:53.038966 containerd[1725]: time="2026-03-10T00:49:53.038885645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 10 00:49:53.299096 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Mar 10 00:49:54.831311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565651334.mount: Deactivated successfully. Mar 10 00:49:54.896875 containerd[1725]: time="2026-03-10T00:49:54.896820870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:54.900244 containerd[1725]: time="2026-03-10T00:49:54.900206315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 10 00:49:54.904929 containerd[1725]: time="2026-03-10T00:49:54.903732000Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:54.908432 containerd[1725]: time="2026-03-10T00:49:54.908392407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:49:54.909232 containerd[1725]: time="2026-03-10T00:49:54.909200368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" 
in 1.870036643s" Mar 10 00:49:54.909285 containerd[1725]: time="2026-03-10T00:49:54.909233248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 10 00:49:54.917480 containerd[1725]: time="2026-03-10T00:49:54.917440421Z" level=info msg="CreateContainer within sandbox \"797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 00:49:54.956621 containerd[1725]: time="2026-03-10T00:49:54.956576119Z" level=info msg="CreateContainer within sandbox \"797554eed029942e680a61d8a30b2951de93c5a04e3896d7b726bb59ef50d4d3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6a09b581c74e839d83af3ce73cd1112f9a054d3893f369b8c920be435c56c7c4\"" Mar 10 00:49:54.958355 containerd[1725]: time="2026-03-10T00:49:54.957391321Z" level=info msg="StartContainer for \"6a09b581c74e839d83af3ce73cd1112f9a054d3893f369b8c920be435c56c7c4\"" Mar 10 00:49:54.983782 systemd[1]: Started cri-containerd-6a09b581c74e839d83af3ce73cd1112f9a054d3893f369b8c920be435c56c7c4.scope - libcontainer container 6a09b581c74e839d83af3ce73cd1112f9a054d3893f369b8c920be435c56c7c4. 
Mar 10 00:49:55.016553 containerd[1725]: time="2026-03-10T00:49:55.016504369Z" level=info msg="StartContainer for \"6a09b581c74e839d83af3ce73cd1112f9a054d3893f369b8c920be435c56c7c4\" returns successfully" Mar 10 00:50:00.199742 containerd[1725]: time="2026-03-10T00:50:00.199429783Z" level=info msg="StopPodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\"" Mar 10 00:50:00.200424 containerd[1725]: time="2026-03-10T00:50:00.200188344Z" level=info msg="StopPodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\"" Mar 10 00:50:00.271122 kubelet[3138]: I0310 00:50:00.270978 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5fffb4dbff-q4dr2" podStartSLOduration=6.576131911 podStartE2EDuration="10.27095537s" podCreationTimestamp="2026-03-10 00:49:50 +0000 UTC" firstStartedPulling="2026-03-10 00:49:51.215208591 +0000 UTC m=+44.118983391" lastFinishedPulling="2026-03-10 00:49:54.91003205 +0000 UTC m=+47.813806850" observedRunningTime="2026-03-10 00:49:55.466417764 +0000 UTC m=+48.370192564" watchObservedRunningTime="2026-03-10 00:50:00.27095537 +0000 UTC m=+53.174730170" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.272 [INFO][4864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.273 [INFO][4864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" iface="eth0" netns="/var/run/netns/cni-7024f321-64e4-a4ee-8388-01ac3ab8dfe6" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.275 [INFO][4864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" iface="eth0" netns="/var/run/netns/cni-7024f321-64e4-a4ee-8388-01ac3ab8dfe6" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.277 [INFO][4864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" iface="eth0" netns="/var/run/netns/cni-7024f321-64e4-a4ee-8388-01ac3ab8dfe6" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.277 [INFO][4864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.277 [INFO][4864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.298 [INFO][4881] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.298 [INFO][4881] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.298 [INFO][4881] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.307 [WARNING][4881] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.307 [INFO][4881] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.308 [INFO][4881] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:00.314714 containerd[1725]: 2026-03-10 00:50:00.310 [INFO][4864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:00.314714 containerd[1725]: time="2026-03-10T00:50:00.313169433Z" level=info msg="TearDown network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" successfully" Mar 10 00:50:00.314714 containerd[1725]: time="2026-03-10T00:50:00.313195794Z" level=info msg="StopPodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" returns successfully" Mar 10 00:50:00.317193 systemd[1]: run-netns-cni\x2d7024f321\x2d64e4\x2da4ee\x2d8388\x2d01ac3ab8dfe6.mount: Deactivated successfully. 
Mar 10 00:50:00.321992 containerd[1725]: time="2026-03-10T00:50:00.321578446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fb6d9494-k5jwc,Uid:587df943-1233-4d7e-919c-b8c045ed5b09,Namespace:calico-system,Attempt:1,}" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.263 [INFO][4863] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.263 [INFO][4863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" iface="eth0" netns="/var/run/netns/cni-9ea2efbf-1ba3-7910-d285-7c2381ea0194" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.264 [INFO][4863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" iface="eth0" netns="/var/run/netns/cni-9ea2efbf-1ba3-7910-d285-7c2381ea0194" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.264 [INFO][4863] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" iface="eth0" netns="/var/run/netns/cni-9ea2efbf-1ba3-7910-d285-7c2381ea0194" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.264 [INFO][4863] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.264 [INFO][4863] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.300 [INFO][4876] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.300 [INFO][4876] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.308 [INFO][4876] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.325 [WARNING][4876] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.325 [INFO][4876] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.327 [INFO][4876] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:00.331143 containerd[1725]: 2026-03-10 00:50:00.329 [INFO][4863] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:00.333127 containerd[1725]: time="2026-03-10T00:50:00.333081623Z" level=info msg="TearDown network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" successfully" Mar 10 00:50:00.333127 containerd[1725]: time="2026-03-10T00:50:00.333120183Z" level=info msg="StopPodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" returns successfully" Mar 10 00:50:00.334108 systemd[1]: run-netns-cni\x2d9ea2efbf\x2d1ba3\x2d7910\x2dd285\x2d7c2381ea0194.mount: Deactivated successfully. 
Mar 10 00:50:00.340708 containerd[1725]: time="2026-03-10T00:50:00.340410154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-rsxcf,Uid:5cf6393d-0fb3-436b-8c41-2d69d998dc28,Namespace:calico-system,Attempt:1,}" Mar 10 00:50:00.516466 systemd-networkd[1510]: caliec5534f3ebc: Link UP Mar 10 00:50:00.518063 systemd-networkd[1510]: caliec5534f3ebc: Gained carrier Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.433 [INFO][4889] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0 calico-kube-controllers-79fb6d9494- calico-system 587df943-1233-4d7e-919c-b8c045ed5b09 952 0 2026-03-10 00:49:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79fb6d9494 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 calico-kube-controllers-79fb6d9494-k5jwc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliec5534f3ebc [] [] }} ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.433 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.466 [INFO][4912] ipam/ipam_plugin.go 235: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" HandleID="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.476 [INFO][4912] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" HandleID="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"calico-kube-controllers-79fb6d9494-k5jwc", "timestamp":"2026-03-10 00:50:00.466691424 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004eb340)} Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.477 [INFO][4912] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.477 [INFO][4912] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.477 [INFO][4912] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.479 [INFO][4912] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.485 [INFO][4912] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.490 [INFO][4912] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.492 [INFO][4912] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.493 [INFO][4912] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.493 [INFO][4912] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.496 [INFO][4912] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207 Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.501 [INFO][4912] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.507 [INFO][4912] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.194/26] block=192.168.30.192/26 handle="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.507 [INFO][4912] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.194/26] handle="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.507 [INFO][4912] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:00.564253 containerd[1725]: 2026-03-10 00:50:00.507 [INFO][4912] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.194/26] IPv6=[] ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" HandleID="k8s-pod-network.237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.567148 containerd[1725]: 2026-03-10 00:50:00.513 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0", GenerateName:"calico-kube-controllers-79fb6d9494-", Namespace:"calico-system", SelfLink:"", UID:"587df943-1233-4d7e-919c-b8c045ed5b09", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79fb6d9494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"calico-kube-controllers-79fb6d9494-k5jwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec5534f3ebc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:00.567148 containerd[1725]: 2026-03-10 00:50:00.513 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.194/32] ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.567148 containerd[1725]: 2026-03-10 00:50:00.513 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec5534f3ebc ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.567148 containerd[1725]: 2026-03-10 00:50:00.518 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.567148 containerd[1725]: 2026-03-10 00:50:00.518 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0", GenerateName:"calico-kube-controllers-79fb6d9494-", Namespace:"calico-system", SelfLink:"", UID:"587df943-1233-4d7e-919c-b8c045ed5b09", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79fb6d9494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207", Pod:"calico-kube-controllers-79fb6d9494-k5jwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec5534f3ebc", MAC:"d2:24:c1:02:e6:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:00.567148 containerd[1725]: 2026-03-10 00:50:00.562 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207" Namespace="calico-system" Pod="calico-kube-controllers-79fb6d9494-k5jwc" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:00.591200 containerd[1725]: time="2026-03-10T00:50:00.590570810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:00.591200 containerd[1725]: time="2026-03-10T00:50:00.590649970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:00.591200 containerd[1725]: time="2026-03-10T00:50:00.590679130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:00.592187 containerd[1725]: time="2026-03-10T00:50:00.591805412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:00.617345 systemd[1]: Started cri-containerd-237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207.scope - libcontainer container 237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207. 
Mar 10 00:50:00.626832 systemd-networkd[1510]: calia6eeedbb882: Link UP Mar 10 00:50:00.628994 systemd-networkd[1510]: calia6eeedbb882: Gained carrier Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.443 [INFO][4898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0 calico-apiserver-646d9b8f6d- calico-system 5cf6393d-0fb3-436b-8c41-2d69d998dc28 951 0 2026-03-10 00:49:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646d9b8f6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 calico-apiserver-646d9b8f6d-rsxcf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia6eeedbb882 [] [] }} ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.443 [INFO][4898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.477 [INFO][4917] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" HandleID="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.659440 
containerd[1725]: 2026-03-10 00:50:00.491 [INFO][4917] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" HandleID="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"calico-apiserver-646d9b8f6d-rsxcf", "timestamp":"2026-03-10 00:50:00.47753908 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000327080)} Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.491 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.509 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.510 [INFO][4917] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.581 [INFO][4917] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.585 [INFO][4917] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.590 [INFO][4917] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.593 [INFO][4917] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.595 [INFO][4917] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.596 [INFO][4917] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.598 [INFO][4917] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.605 [INFO][4917] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.620 [INFO][4917] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.195/26] block=192.168.30.192/26 handle="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.620 [INFO][4917] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.195/26] handle="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.620 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:00.659440 containerd[1725]: 2026-03-10 00:50:00.620 [INFO][4917] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.195/26] IPv6=[] ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" HandleID="k8s-pod-network.75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.660321 containerd[1725]: 2026-03-10 00:50:00.622 [INFO][4898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"5cf6393d-0fb3-436b-8c41-2d69d998dc28", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"calico-apiserver-646d9b8f6d-rsxcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia6eeedbb882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:00.660321 containerd[1725]: 2026-03-10 00:50:00.622 [INFO][4898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.195/32] ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.660321 containerd[1725]: 2026-03-10 00:50:00.622 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6eeedbb882 ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.660321 containerd[1725]: 2026-03-10 00:50:00.629 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" 
WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.660321 containerd[1725]: 2026-03-10 00:50:00.631 [INFO][4898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"5cf6393d-0fb3-436b-8c41-2d69d998dc28", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb", Pod:"calico-apiserver-646d9b8f6d-rsxcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia6eeedbb882", MAC:"8e:56:ab:00:28:2e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:00.660321 containerd[1725]: 2026-03-10 00:50:00.655 [INFO][4898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-rsxcf" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:00.683396 containerd[1725]: time="2026-03-10T00:50:00.683131149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:00.683396 containerd[1725]: time="2026-03-10T00:50:00.683216749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:00.683396 containerd[1725]: time="2026-03-10T00:50:00.683232909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:00.683644 containerd[1725]: time="2026-03-10T00:50:00.683530189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:00.687380 containerd[1725]: time="2026-03-10T00:50:00.687313435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79fb6d9494-k5jwc,Uid:587df943-1233-4d7e-919c-b8c045ed5b09,Namespace:calico-system,Attempt:1,} returns sandbox id \"237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207\"" Mar 10 00:50:00.690590 containerd[1725]: time="2026-03-10T00:50:00.690170239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 10 00:50:00.713051 systemd[1]: Started cri-containerd-75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb.scope - libcontainer container 75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb. Mar 10 00:50:00.747258 containerd[1725]: time="2026-03-10T00:50:00.747209565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-rsxcf,Uid:5cf6393d-0fb3-436b-8c41-2d69d998dc28,Namespace:calico-system,Attempt:1,} returns sandbox id \"75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb\"" Mar 10 00:50:01.682800 systemd-networkd[1510]: calia6eeedbb882: Gained IPv6LL Mar 10 00:50:02.066829 systemd-networkd[1510]: caliec5534f3ebc: Gained IPv6LL Mar 10 00:50:02.200301 containerd[1725]: time="2026-03-10T00:50:02.200026584Z" level=info msg="StopPodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\"" Mar 10 00:50:02.200610 containerd[1725]: time="2026-03-10T00:50:02.200043464Z" level=info msg="StopPodSandbox for \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\"" Mar 10 00:50:02.203223 containerd[1725]: time="2026-03-10T00:50:02.200026544Z" level=info msg="StopPodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\"" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.305 [INFO][5091] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.305 [INFO][5091] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" iface="eth0" netns="/var/run/netns/cni-c4188b7b-a79b-8e16-a441-ce93eb46bd02" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.305 [INFO][5091] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" iface="eth0" netns="/var/run/netns/cni-c4188b7b-a79b-8e16-a441-ce93eb46bd02" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.305 [INFO][5091] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" iface="eth0" netns="/var/run/netns/cni-c4188b7b-a79b-8e16-a441-ce93eb46bd02" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.305 [INFO][5091] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.305 [INFO][5091] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.338 [INFO][5114] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.338 [INFO][5114] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.338 [INFO][5114] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.356 [WARNING][5114] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.356 [INFO][5114] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.357 [INFO][5114] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:02.364945 containerd[1725]: 2026-03-10 00:50:02.361 [INFO][5091] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:02.369112 containerd[1725]: time="2026-03-10T00:50:02.367758636Z" level=info msg="TearDown network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" successfully" Mar 10 00:50:02.369112 containerd[1725]: time="2026-03-10T00:50:02.367799796Z" level=info msg="StopPodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" returns successfully" Mar 10 00:50:02.369702 systemd[1]: run-netns-cni\x2dc4188b7b\x2da79b\x2d8e16\x2da441\x2dce93eb46bd02.mount: Deactivated successfully. 
Mar 10 00:50:02.375911 containerd[1725]: time="2026-03-10T00:50:02.375855608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2rhxk,Uid:8e1f6144-8d73-40ae-a431-d82abb11d87e,Namespace:kube-system,Attempt:1,}" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.299 [INFO][5084] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.299 [INFO][5084] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" iface="eth0" netns="/var/run/netns/cni-6406e753-16bf-7350-d38c-052242b61326" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.299 [INFO][5084] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" iface="eth0" netns="/var/run/netns/cni-6406e753-16bf-7350-d38c-052242b61326" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.300 [INFO][5084] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" iface="eth0" netns="/var/run/netns/cni-6406e753-16bf-7350-d38c-052242b61326" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.300 [INFO][5084] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.300 [INFO][5084] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.348 [INFO][5111] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.349 [INFO][5111] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.357 [INFO][5111] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.373 [WARNING][5111] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.373 [INFO][5111] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.376 [INFO][5111] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:02.379376 containerd[1725]: 2026-03-10 00:50:02.378 [INFO][5084] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:02.379931 containerd[1725]: time="2026-03-10T00:50:02.379897374Z" level=info msg="TearDown network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" successfully" Mar 10 00:50:02.379931 containerd[1725]: time="2026-03-10T00:50:02.379928774Z" level=info msg="StopPodSandbox for \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" returns successfully" Mar 10 00:50:02.383492 systemd[1]: run-netns-cni\x2d6406e753\x2d16bf\x2d7350\x2dd38c\x2d052242b61326.mount: Deactivated successfully. 
Mar 10 00:50:02.400745 containerd[1725]: time="2026-03-10T00:50:02.400677926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-djf48,Uid:fe1fca07-2fb3-438a-bce9-8b3a59b0fe53,Namespace:kube-system,Attempt:1,}" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.325 [INFO][5095] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.325 [INFO][5095] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" iface="eth0" netns="/var/run/netns/cni-f965a056-ccc5-66b7-427a-cde24365d13c" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.326 [INFO][5095] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" iface="eth0" netns="/var/run/netns/cni-f965a056-ccc5-66b7-427a-cde24365d13c" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.326 [INFO][5095] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" iface="eth0" netns="/var/run/netns/cni-f965a056-ccc5-66b7-427a-cde24365d13c" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.326 [INFO][5095] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.326 [INFO][5095] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.358 [INFO][5122] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.358 [INFO][5122] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.374 [INFO][5122] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.395 [WARNING][5122] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.395 [INFO][5122] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.397 [INFO][5122] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:02.402728 containerd[1725]: 2026-03-10 00:50:02.401 [INFO][5095] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:02.405035 containerd[1725]: time="2026-03-10T00:50:02.402999929Z" level=info msg="TearDown network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" successfully" Mar 10 00:50:02.405035 containerd[1725]: time="2026-03-10T00:50:02.403026729Z" level=info msg="StopPodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" returns successfully" Mar 10 00:50:02.406448 systemd[1]: run-netns-cni\x2df965a056\x2dccc5\x2d66b7\x2d427a\x2dcde24365d13c.mount: Deactivated successfully. 
Mar 10 00:50:02.409716 containerd[1725]: time="2026-03-10T00:50:02.409579059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhks4,Uid:e643e7c4-0931-432c-9761-f364e4ac4030,Namespace:calico-system,Attempt:1,}" Mar 10 00:50:02.631753 systemd-networkd[1510]: cali4931bd3578b: Link UP Mar 10 00:50:02.632024 systemd-networkd[1510]: cali4931bd3578b: Gained carrier Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.477 [INFO][5133] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0 coredns-7d764666f9- kube-system 8e1f6144-8d73-40ae-a431-d82abb11d87e 968 0 2026-03-10 00:49:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 coredns-7d764666f9-2rhxk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4931bd3578b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.477 [INFO][5133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.541 [INFO][5155] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" 
HandleID="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.556 [INFO][5155] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" HandleID="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"coredns-7d764666f9-2rhxk", "timestamp":"2026-03-10 00:50:02.541227136 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002691e0)} Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.556 [INFO][5155] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.556 [INFO][5155] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.557 [INFO][5155] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.561 [INFO][5155] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.568 [INFO][5155] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.577 [INFO][5155] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.582 [INFO][5155] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.589 [INFO][5155] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.589 [INFO][5155] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.591 [INFO][5155] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41 Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.599 [INFO][5155] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.615 [INFO][5155] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.196/26] block=192.168.30.192/26 handle="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.616 [INFO][5155] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.196/26] handle="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.616 [INFO][5155] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:02.664732 containerd[1725]: 2026-03-10 00:50:02.616 [INFO][5155] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.196/26] IPv6=[] ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" HandleID="k8s-pod-network.5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.666326 containerd[1725]: 2026-03-10 00:50:02.621 [INFO][5133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"8e1f6144-8d73-40ae-a431-d82abb11d87e", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"coredns-7d764666f9-2rhxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4931bd3578b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:02.666326 containerd[1725]: 2026-03-10 00:50:02.624 [INFO][5133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.196/32] ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.666326 containerd[1725]: 2026-03-10 00:50:02.624 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4931bd3578b 
ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.666326 containerd[1725]: 2026-03-10 00:50:02.630 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.666326 containerd[1725]: 2026-03-10 00:50:02.631 [INFO][5133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"8e1f6144-8d73-40ae-a431-d82abb11d87e", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41", 
Pod:"coredns-7d764666f9-2rhxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4931bd3578b", MAC:"6a:c1:71:78:22:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:02.666526 containerd[1725]: 2026-03-10 00:50:02.662 [INFO][5133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41" Namespace="kube-system" Pod="coredns-7d764666f9-2rhxk" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:02.711021 systemd-networkd[1510]: cali158fcb71383: Link UP Mar 10 00:50:02.711611 systemd-networkd[1510]: cali158fcb71383: Gained carrier Mar 10 00:50:02.731502 containerd[1725]: time="2026-03-10T00:50:02.731402222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:02.731502 containerd[1725]: time="2026-03-10T00:50:02.731460302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:02.731502 containerd[1725]: time="2026-03-10T00:50:02.731476462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:02.732025 containerd[1725]: time="2026-03-10T00:50:02.731838422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.551 [INFO][5150] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0 csi-node-driver- calico-system e643e7c4-0931-432c-9761-f364e4ac4030 969 0 2026-03-10 00:49:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 csi-node-driver-zhks4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali158fcb71383 [] [] }} ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.552 [INFO][5150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" 
WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.634 [INFO][5177] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" HandleID="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.660 [INFO][5177] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" HandleID="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f0190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"csi-node-driver-zhks4", "timestamp":"2026-03-10 00:50:02.634753677 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000296dc0)} Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.660 [INFO][5177] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.660 [INFO][5177] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.660 [INFO][5177] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.665 [INFO][5177] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.672 [INFO][5177] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.678 [INFO][5177] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.680 [INFO][5177] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.683 [INFO][5177] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.683 [INFO][5177] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.685 [INFO][5177] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65 Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.691 [INFO][5177] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.701 [INFO][5177] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.197/26] block=192.168.30.192/26 handle="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.701 [INFO][5177] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.197/26] handle="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.701 [INFO][5177] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:02.741268 containerd[1725]: 2026-03-10 00:50:02.702 [INFO][5177] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.197/26] IPv6=[] ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" HandleID="k8s-pod-network.23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.741810 containerd[1725]: 2026-03-10 00:50:02.704 [INFO][5150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e643e7c4-0931-432c-9761-f364e4ac4030", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"csi-node-driver-zhks4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali158fcb71383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:02.741810 containerd[1725]: 2026-03-10 00:50:02.704 [INFO][5150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.197/32] ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.741810 containerd[1725]: 2026-03-10 00:50:02.704 [INFO][5150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali158fcb71383 ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.741810 containerd[1725]: 2026-03-10 00:50:02.712 [INFO][5150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.741810 
containerd[1725]: 2026-03-10 00:50:02.712 [INFO][5150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e643e7c4-0931-432c-9761-f364e4ac4030", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65", Pod:"csi-node-driver-zhks4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali158fcb71383", MAC:"8a:45:dd:4c:82:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:02.741810 containerd[1725]: 
2026-03-10 00:50:02.736 [INFO][5150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65" Namespace="calico-system" Pod="csi-node-driver-zhks4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:02.758818 systemd[1]: Started cri-containerd-5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41.scope - libcontainer container 5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41. Mar 10 00:50:02.778201 containerd[1725]: time="2026-03-10T00:50:02.777992132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:02.778548 containerd[1725]: time="2026-03-10T00:50:02.778085452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:02.778790 containerd[1725]: time="2026-03-10T00:50:02.778514252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:02.779363 containerd[1725]: time="2026-03-10T00:50:02.779225893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:02.810296 systemd[1]: Started cri-containerd-23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65.scope - libcontainer container 23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65. 
Mar 10 00:50:02.840798 containerd[1725]: time="2026-03-10T00:50:02.840710346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-2rhxk,Uid:8e1f6144-8d73-40ae-a431-d82abb11d87e,Namespace:kube-system,Attempt:1,} returns sandbox id \"5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41\"" Mar 10 00:50:02.852414 containerd[1725]: time="2026-03-10T00:50:02.852354283Z" level=info msg="CreateContainer within sandbox \"5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 00:50:02.856773 systemd-networkd[1510]: calid9c7affcdb1: Link UP Mar 10 00:50:02.860414 systemd-networkd[1510]: calid9c7affcdb1: Gained carrier Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.551 [INFO][5144] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0 coredns-7d764666f9- kube-system fe1fca07-2fb3-438a-bce9-8b3a59b0fe53 967 0 2026-03-10 00:49:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 coredns-7d764666f9-djf48 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid9c7affcdb1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.552 [INFO][5144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" 
Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.621 [INFO][5178] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" HandleID="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.662 [INFO][5178] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" HandleID="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbd20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"coredns-7d764666f9-djf48", "timestamp":"2026-03-10 00:50:02.621047976 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e02c0)} Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.662 [INFO][5178] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.702 [INFO][5178] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.702 [INFO][5178] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.770 [INFO][5178] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.782 [INFO][5178] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.792 [INFO][5178] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.799 [INFO][5178] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.807 [INFO][5178] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.807 [INFO][5178] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.812 [INFO][5178] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.827 [INFO][5178] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.845 [INFO][5178] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.198/26] block=192.168.30.192/26 handle="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.845 [INFO][5178] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.198/26] handle="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.846 [INFO][5178] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:02.897046 containerd[1725]: 2026-03-10 00:50:02.846 [INFO][5178] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.198/26] IPv6=[] ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" HandleID="k8s-pod-network.f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.897576 containerd[1725]: 2026-03-10 00:50:02.850 [INFO][5144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"coredns-7d764666f9-djf48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9c7affcdb1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:02.897576 containerd[1725]: 2026-03-10 00:50:02.850 [INFO][5144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.198/32] ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.897576 containerd[1725]: 2026-03-10 00:50:02.850 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9c7affcdb1 
ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.897576 containerd[1725]: 2026-03-10 00:50:02.866 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.897576 containerd[1725]: 2026-03-10 00:50:02.868 [INFO][5144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef", 
Pod:"coredns-7d764666f9-djf48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9c7affcdb1", MAC:"52:c8:99:68:7e:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:02.901052 containerd[1725]: 2026-03-10 00:50:02.891 [INFO][5144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef" Namespace="kube-system" Pod="coredns-7d764666f9-djf48" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:02.906981 containerd[1725]: time="2026-03-10T00:50:02.906942045Z" level=info msg="CreateContainer within sandbox \"5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"144004493eaa5a1db75ecebb533e0f317aa2f6d68e185bcfa07b0a93dec491a1\"" Mar 10 00:50:02.908617 containerd[1725]: time="2026-03-10T00:50:02.908067167Z" level=info msg="StartContainer for 
\"144004493eaa5a1db75ecebb533e0f317aa2f6d68e185bcfa07b0a93dec491a1\"" Mar 10 00:50:02.920489 containerd[1725]: time="2026-03-10T00:50:02.920451665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zhks4,Uid:e643e7c4-0931-432c-9761-f364e4ac4030,Namespace:calico-system,Attempt:1,} returns sandbox id \"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65\"" Mar 10 00:50:02.937847 containerd[1725]: time="2026-03-10T00:50:02.937752651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:02.938005 containerd[1725]: time="2026-03-10T00:50:02.937986052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:02.938049 containerd[1725]: time="2026-03-10T00:50:02.938006212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:02.938134 containerd[1725]: time="2026-03-10T00:50:02.938107132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:02.956874 systemd[1]: Started cri-containerd-144004493eaa5a1db75ecebb533e0f317aa2f6d68e185bcfa07b0a93dec491a1.scope - libcontainer container 144004493eaa5a1db75ecebb533e0f317aa2f6d68e185bcfa07b0a93dec491a1. Mar 10 00:50:02.963831 systemd[1]: Started cri-containerd-f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef.scope - libcontainer container f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef. 
Mar 10 00:50:03.011088 containerd[1725]: time="2026-03-10T00:50:03.010941921Z" level=info msg="StartContainer for \"144004493eaa5a1db75ecebb533e0f317aa2f6d68e185bcfa07b0a93dec491a1\" returns successfully" Mar 10 00:50:03.017989 containerd[1725]: time="2026-03-10T00:50:03.017953492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-djf48,Uid:fe1fca07-2fb3-438a-bce9-8b3a59b0fe53,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef\"" Mar 10 00:50:03.031787 containerd[1725]: time="2026-03-10T00:50:03.031748832Z" level=info msg="CreateContainer within sandbox \"f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 00:50:03.068611 containerd[1725]: time="2026-03-10T00:50:03.068485967Z" level=info msg="CreateContainer within sandbox \"f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"414cb8b490d984dd1de1e10651ea502f22257d9dfa84290127d347e493968011\"" Mar 10 00:50:03.070681 containerd[1725]: time="2026-03-10T00:50:03.069364009Z" level=info msg="StartContainer for \"414cb8b490d984dd1de1e10651ea502f22257d9dfa84290127d347e493968011\"" Mar 10 00:50:03.095821 systemd[1]: Started cri-containerd-414cb8b490d984dd1de1e10651ea502f22257d9dfa84290127d347e493968011.scope - libcontainer container 414cb8b490d984dd1de1e10651ea502f22257d9dfa84290127d347e493968011. 
Mar 10 00:50:03.124532 containerd[1725]: time="2026-03-10T00:50:03.124471971Z" level=info msg="StartContainer for \"414cb8b490d984dd1de1e10651ea502f22257d9dfa84290127d347e493968011\" returns successfully" Mar 10 00:50:03.201940 containerd[1725]: time="2026-03-10T00:50:03.201824248Z" level=info msg="StopPodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\"" Mar 10 00:50:03.203138 containerd[1725]: time="2026-03-10T00:50:03.201840568Z" level=info msg="StopPodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\"" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.268 [INFO][5468] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.269 [INFO][5468] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" iface="eth0" netns="/var/run/netns/cni-5ad13eda-d0c0-1c09-2690-a3bfbcadc2ac" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.269 [INFO][5468] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" iface="eth0" netns="/var/run/netns/cni-5ad13eda-d0c0-1c09-2690-a3bfbcadc2ac" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.269 [INFO][5468] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" iface="eth0" netns="/var/run/netns/cni-5ad13eda-d0c0-1c09-2690-a3bfbcadc2ac" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.269 [INFO][5468] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.269 [INFO][5468] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.297 [INFO][5483] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.297 [INFO][5483] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.298 [INFO][5483] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.308 [WARNING][5483] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.309 [INFO][5483] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.311 [INFO][5483] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:03.315953 containerd[1725]: 2026-03-10 00:50:03.313 [INFO][5468] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:03.316776 containerd[1725]: time="2026-03-10T00:50:03.316374419Z" level=info msg="TearDown network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" successfully" Mar 10 00:50:03.316776 containerd[1725]: time="2026-03-10T00:50:03.316665900Z" level=info msg="StopPodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" returns successfully" Mar 10 00:50:03.324170 containerd[1725]: time="2026-03-10T00:50:03.324119711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-c5qrt,Uid:1484e1bd-a157-4269-85f5-50c715d3704d,Namespace:calico-system,Attempt:1,}" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.274 [INFO][5469] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.274 [INFO][5469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" iface="eth0" netns="/var/run/netns/cni-b462d20b-5efb-1887-950b-ae5a4ade7e83" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.275 [INFO][5469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" iface="eth0" netns="/var/run/netns/cni-b462d20b-5efb-1887-950b-ae5a4ade7e83" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.275 [INFO][5469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" iface="eth0" netns="/var/run/netns/cni-b462d20b-5efb-1887-950b-ae5a4ade7e83" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.275 [INFO][5469] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.275 [INFO][5469] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.313 [INFO][5488] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.313 [INFO][5488] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.313 [INFO][5488] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.323 [WARNING][5488] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.323 [INFO][5488] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.325 [INFO][5488] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:03.330168 containerd[1725]: 2026-03-10 00:50:03.327 [INFO][5469] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:03.330972 containerd[1725]: time="2026-03-10T00:50:03.330731441Z" level=info msg="TearDown network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" successfully" Mar 10 00:50:03.330972 containerd[1725]: time="2026-03-10T00:50:03.330761601Z" level=info msg="StopPodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" returns successfully" Mar 10 00:50:03.337318 containerd[1725]: time="2026-03-10T00:50:03.337277971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-tqwb4,Uid:7e5ad127-4df9-4c78-b9f6-97b3790dc3ca,Namespace:calico-system,Attempt:1,}" Mar 10 00:50:03.377203 systemd[1]: run-netns-cni\x2db462d20b\x2d5efb\x2d1887\x2d950b\x2dae5a4ade7e83.mount: Deactivated successfully. 
Mar 10 00:50:03.377719 systemd[1]: run-netns-cni\x2d5ad13eda\x2dd0c0\x2d1c09\x2d2690\x2da3bfbcadc2ac.mount: Deactivated successfully. Mar 10 00:50:03.658091 kubelet[3138]: I0310 00:50:03.544839 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-2rhxk" podStartSLOduration=50.544821522 podStartE2EDuration="50.544821522s" podCreationTimestamp="2026-03-10 00:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 00:50:03.511126592 +0000 UTC m=+56.414901392" watchObservedRunningTime="2026-03-10 00:50:03.544821522 +0000 UTC m=+56.448596322" Mar 10 00:50:03.658091 kubelet[3138]: I0310 00:50:03.634569 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-djf48" podStartSLOduration=50.634551857 podStartE2EDuration="50.634551857s" podCreationTimestamp="2026-03-10 00:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 00:50:03.57032532 +0000 UTC m=+56.474100120" watchObservedRunningTime="2026-03-10 00:50:03.634551857 +0000 UTC m=+56.538326697" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.428 [INFO][5497] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0 goldmane-9f7667bb8- calico-system 1484e1bd-a157-4269-85f5-50c715d3704d 989 0 2026-03-10 00:49:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 goldmane-9f7667bb8-c5qrt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif1e17318e65 [] [] }} 
ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.429 [INFO][5497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.469 [INFO][5519] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" HandleID="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.494 [INFO][5519] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" HandleID="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"goldmane-9f7667bb8-c5qrt", "timestamp":"2026-03-10 00:50:03.469476569 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002def20)} Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.494 [INFO][5519] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.494 [INFO][5519] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.494 [INFO][5519] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.496 [INFO][5519] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.512 [INFO][5519] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.541 [INFO][5519] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.547 [INFO][5519] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.556 [INFO][5519] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.556 [INFO][5519] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.568 [INFO][5519] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.577 [INFO][5519] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 
handle="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.603 [INFO][5519] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.30.199/26] block=192.168.30.192/26 handle="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.604 [INFO][5519] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.199/26] handle="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.604 [INFO][5519] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:03.658615 containerd[1725]: 2026-03-10 00:50:03.604 [INFO][5519] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.199/26] IPv6=[] ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" HandleID="k8s-pod-network.dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.614042 systemd-networkd[1510]: calif1e17318e65: Link UP Mar 10 00:50:03.662054 containerd[1725]: 2026-03-10 00:50:03.607 [INFO][5497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1484e1bd-a157-4269-85f5-50c715d3704d", ResourceVersion:"989", 
Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"goldmane-9f7667bb8-c5qrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1e17318e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:03.662054 containerd[1725]: 2026-03-10 00:50:03.607 [INFO][5497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.199/32] ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.662054 containerd[1725]: 2026-03-10 00:50:03.607 [INFO][5497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1e17318e65 ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.662054 containerd[1725]: 2026-03-10 00:50:03.612 [INFO][5497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.662054 containerd[1725]: 2026-03-10 00:50:03.615 [INFO][5497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1484e1bd-a157-4269-85f5-50c715d3704d", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f", Pod:"goldmane-9f7667bb8-c5qrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1e17318e65", MAC:"72:6a:60:93:03:d7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:03.662054 containerd[1725]: 2026-03-10 00:50:03.632 [INFO][5497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f" Namespace="calico-system" Pod="goldmane-9f7667bb8-c5qrt" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:03.614198 systemd-networkd[1510]: calif1e17318e65: Gained carrier Mar 10 00:50:03.682398 systemd-networkd[1510]: cali27b4907a866: Link UP Mar 10 00:50:03.684183 systemd-networkd[1510]: cali27b4907a866: Gained carrier Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.448 [INFO][5509] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0 calico-apiserver-646d9b8f6d- calico-system 7e5ad127-4df9-4c78-b9f6-97b3790dc3ca 990 0 2026-03-10 00:49:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:646d9b8f6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-9b959526b1 calico-apiserver-646d9b8f6d-tqwb4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali27b4907a866 [] [] }} ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.449 [INFO][5509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" 
Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.495 [INFO][5525] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" HandleID="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.513 [INFO][5525] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" HandleID="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-9b959526b1", "pod":"calico-apiserver-646d9b8f6d-tqwb4", "timestamp":"2026-03-10 00:50:03.495445888 +0000 UTC"}, Hostname:"ci-4081.3.6-n-9b959526b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400036f080)} Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.514 [INFO][5525] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.604 [INFO][5525] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.604 [INFO][5525] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-9b959526b1' Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.619 [INFO][5525] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.635 [INFO][5525] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.649 [INFO][5525] ipam/ipam.go 526: Trying affinity for 192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.652 [INFO][5525] ipam/ipam.go 160: Attempting to load block cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.655 [INFO][5525] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.30.192/26 host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.656 [INFO][5525] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.30.192/26 handle="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.659 [INFO][5525] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1 Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.665 [INFO][5525] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.30.192/26 handle="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.676 [INFO][5525] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.30.200/26] block=192.168.30.192/26 handle="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.676 [INFO][5525] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.30.200/26] handle="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" host="ci-4081.3.6-n-9b959526b1" Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.676 [INFO][5525] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:03.712589 containerd[1725]: 2026-03-10 00:50:03.676 [INFO][5525] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.30.200/26] IPv6=[] ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" HandleID="k8s-pod-network.22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.713255 containerd[1725]: 2026-03-10 00:50:03.678 [INFO][5509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"", Pod:"calico-apiserver-646d9b8f6d-tqwb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali27b4907a866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:03.713255 containerd[1725]: 2026-03-10 00:50:03.678 [INFO][5509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.200/32] ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.713255 containerd[1725]: 2026-03-10 00:50:03.679 [INFO][5509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27b4907a866 ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.713255 containerd[1725]: 2026-03-10 00:50:03.685 [INFO][5509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" 
WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.713255 containerd[1725]: 2026-03-10 00:50:03.687 [INFO][5509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1", Pod:"calico-apiserver-646d9b8f6d-tqwb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali27b4907a866", MAC:"0a:68:82:ea:10:86", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:03.713255 containerd[1725]: 2026-03-10 00:50:03.705 [INFO][5509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1" Namespace="calico-system" Pod="calico-apiserver-646d9b8f6d-tqwb4" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:03.859711 systemd-networkd[1510]: cali158fcb71383: Gained IPv6LL Mar 10 00:50:03.880560 containerd[1725]: time="2026-03-10T00:50:03.880442866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:03.880794 containerd[1725]: time="2026-03-10T00:50:03.880559226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:03.880794 containerd[1725]: time="2026-03-10T00:50:03.880581746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:03.882146 containerd[1725]: time="2026-03-10T00:50:03.881138347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:03.882937 containerd[1725]: time="2026-03-10T00:50:03.882821869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 00:50:03.882937 containerd[1725]: time="2026-03-10T00:50:03.882894949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 00:50:03.882937 containerd[1725]: time="2026-03-10T00:50:03.882914309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:03.883490 containerd[1725]: time="2026-03-10T00:50:03.883193270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 00:50:03.927830 systemd[1]: Started cri-containerd-22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1.scope - libcontainer container 22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1. Mar 10 00:50:03.929280 systemd[1]: Started cri-containerd-dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f.scope - libcontainer container dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f. Mar 10 00:50:03.968114 containerd[1725]: time="2026-03-10T00:50:03.968075677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-c5qrt,Uid:1484e1bd-a157-4269-85f5-50c715d3704d,Namespace:calico-system,Attempt:1,} returns sandbox id \"dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f\"" Mar 10 00:50:03.976829 containerd[1725]: time="2026-03-10T00:50:03.976621450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-646d9b8f6d-tqwb4,Uid:7e5ad127-4df9-4c78-b9f6-97b3790dc3ca,Namespace:calico-system,Attempt:1,} returns sandbox id \"22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1\"" Mar 10 00:50:03.986880 systemd-networkd[1510]: calid9c7affcdb1: Gained IPv6LL Mar 10 00:50:04.243588 systemd-networkd[1510]: cali4931bd3578b: Gained IPv6LL Mar 10 00:50:04.614362 containerd[1725]: time="2026-03-10T00:50:04.614314407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:04.617731 containerd[1725]: time="2026-03-10T00:50:04.617693772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 10 00:50:04.622390 containerd[1725]: 
time="2026-03-10T00:50:04.621292777Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:04.625823 containerd[1725]: time="2026-03-10T00:50:04.625785864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:04.626707 containerd[1725]: time="2026-03-10T00:50:04.626675505Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.936470706s" Mar 10 00:50:04.626773 containerd[1725]: time="2026-03-10T00:50:04.626709425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 10 00:50:04.628488 containerd[1725]: time="2026-03-10T00:50:04.628462548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 00:50:04.654179 containerd[1725]: time="2026-03-10T00:50:04.654140427Z" level=info msg="CreateContainer within sandbox \"237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 10 00:50:04.690284 containerd[1725]: time="2026-03-10T00:50:04.690163921Z" level=info msg="CreateContainer within sandbox \"237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"cffe5dabf139ae1a2f2ba5dce41ab07cba9577ec6a09263c603ed88216657e8f\"" Mar 10 00:50:04.692747 containerd[1725]: time="2026-03-10T00:50:04.691062842Z" level=info msg="StartContainer for \"cffe5dabf139ae1a2f2ba5dce41ab07cba9577ec6a09263c603ed88216657e8f\"" Mar 10 00:50:04.691303 systemd-networkd[1510]: calif1e17318e65: Gained IPv6LL Mar 10 00:50:04.728850 systemd[1]: Started cri-containerd-cffe5dabf139ae1a2f2ba5dce41ab07cba9577ec6a09263c603ed88216657e8f.scope - libcontainer container cffe5dabf139ae1a2f2ba5dce41ab07cba9577ec6a09263c603ed88216657e8f. Mar 10 00:50:04.766015 containerd[1725]: time="2026-03-10T00:50:04.765971834Z" level=info msg="StartContainer for \"cffe5dabf139ae1a2f2ba5dce41ab07cba9577ec6a09263c603ed88216657e8f\" returns successfully" Mar 10 00:50:05.523036 kubelet[3138]: I0310 00:50:05.522772 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79fb6d9494-k5jwc" podStartSLOduration=32.584388313 podStartE2EDuration="36.522755862s" podCreationTimestamp="2026-03-10 00:49:29 +0000 UTC" firstStartedPulling="2026-03-10 00:50:00.689883239 +0000 UTC m=+53.593658039" lastFinishedPulling="2026-03-10 00:50:04.628250788 +0000 UTC m=+57.532025588" observedRunningTime="2026-03-10 00:50:05.52153022 +0000 UTC m=+58.425305020" watchObservedRunningTime="2026-03-10 00:50:05.522755862 +0000 UTC m=+58.426530662" Mar 10 00:50:05.714939 systemd-networkd[1510]: cali27b4907a866: Gained IPv6LL Mar 10 00:50:07.211872 containerd[1725]: time="2026-03-10T00:50:07.211837983Z" level=info msg="StopPodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\"" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.252 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"5cf6393d-0fb3-436b-8c41-2d69d998dc28", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb", Pod:"calico-apiserver-646d9b8f6d-rsxcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia6eeedbb882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.252 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.252 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" iface="eth0" netns="" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.252 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.252 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.276 [INFO][5776] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.276 [INFO][5776] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.276 [INFO][5776] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.288 [WARNING][5776] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.288 [INFO][5776] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.290 [INFO][5776] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:07.294921 containerd[1725]: 2026-03-10 00:50:07.293 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.295446 containerd[1725]: time="2026-03-10T00:50:07.294958783Z" level=info msg="TearDown network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" successfully" Mar 10 00:50:07.295446 containerd[1725]: time="2026-03-10T00:50:07.294984343Z" level=info msg="StopPodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" returns successfully" Mar 10 00:50:07.295496 containerd[1725]: time="2026-03-10T00:50:07.295465463Z" level=info msg="RemovePodSandbox for \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\"" Mar 10 00:50:07.295521 containerd[1725]: time="2026-03-10T00:50:07.295492584Z" level=info msg="Forcibly stopping sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\"" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.329 [WARNING][5790] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"5cf6393d-0fb3-436b-8c41-2d69d998dc28", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb", Pod:"calico-apiserver-646d9b8f6d-rsxcf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia6eeedbb882", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.329 [INFO][5790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.329 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" iface="eth0" netns="" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.329 [INFO][5790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.329 [INFO][5790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.351 [INFO][5797] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.351 [INFO][5797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.351 [INFO][5797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.360 [WARNING][5797] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.360 [INFO][5797] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" HandleID="k8s-pod-network.6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--rsxcf-eth0" Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.362 [INFO][5797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:07.365725 containerd[1725]: 2026-03-10 00:50:07.363 [INFO][5790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a" Mar 10 00:50:07.366174 containerd[1725]: time="2026-03-10T00:50:07.365777445Z" level=info msg="TearDown network for sandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" successfully" Mar 10 00:50:07.420441 containerd[1725]: time="2026-03-10T00:50:07.419780803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:07.420441 containerd[1725]: time="2026-03-10T00:50:07.419843083Z" level=info msg="RemovePodSandbox \"6dad24c90694bf9bb32e629d333a3c1989b39c2c306840feb32bd6c9eb942d4a\" returns successfully" Mar 10 00:50:07.420441 containerd[1725]: time="2026-03-10T00:50:07.420185524Z" level=info msg="StopPodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\"" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.496 [WARNING][5816] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0", GenerateName:"calico-kube-controllers-79fb6d9494-", Namespace:"calico-system", SelfLink:"", UID:"587df943-1233-4d7e-919c-b8c045ed5b09", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79fb6d9494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207", Pod:"calico-kube-controllers-79fb6d9494-k5jwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec5534f3ebc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.499 [INFO][5816] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.499 [INFO][5816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" iface="eth0" netns="" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.499 [INFO][5816] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.499 [INFO][5816] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.528 [INFO][5825] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.528 [INFO][5825] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.529 [INFO][5825] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.540 [WARNING][5825] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.540 [INFO][5825] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.544 [INFO][5825] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:07.551263 containerd[1725]: 2026-03-10 00:50:07.546 [INFO][5816] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.551697 containerd[1725]: time="2026-03-10T00:50:07.551260233Z" level=info msg="TearDown network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" successfully" Mar 10 00:50:07.551697 containerd[1725]: time="2026-03-10T00:50:07.551294513Z" level=info msg="StopPodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" returns successfully" Mar 10 00:50:07.552296 containerd[1725]: time="2026-03-10T00:50:07.552108274Z" level=info msg="RemovePodSandbox for \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\"" Mar 10 00:50:07.552368 containerd[1725]: time="2026-03-10T00:50:07.552301915Z" level=info msg="Forcibly stopping sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\"" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.597 [WARNING][5841] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0", GenerateName:"calico-kube-controllers-79fb6d9494-", Namespace:"calico-system", SelfLink:"", UID:"587df943-1233-4d7e-919c-b8c045ed5b09", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79fb6d9494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"237a61203c083991a0864c43675751a9991e31a4d8cde6b931e3c46d995ea207", Pod:"calico-kube-controllers-79fb6d9494-k5jwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliec5534f3ebc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.598 [INFO][5841] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.598 [INFO][5841] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" iface="eth0" netns="" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.598 [INFO][5841] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.598 [INFO][5841] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.630 [INFO][5848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.630 [INFO][5848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.630 [INFO][5848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.641 [WARNING][5848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.641 [INFO][5848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" HandleID="k8s-pod-network.ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--kube--controllers--79fb6d9494--k5jwc-eth0" Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.643 [INFO][5848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:07.647553 containerd[1725]: 2026-03-10 00:50:07.645 [INFO][5841] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f" Mar 10 00:50:07.647553 containerd[1725]: time="2026-03-10T00:50:07.647485732Z" level=info msg="TearDown network for sandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" successfully" Mar 10 00:50:07.659422 containerd[1725]: time="2026-03-10T00:50:07.659102509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:07.659422 containerd[1725]: time="2026-03-10T00:50:07.659210389Z" level=info msg="RemovePodSandbox \"ed96b9d0c338918e9833efc5a7e412c5b42f7353f18180836a8cf51e09c3d73f\" returns successfully" Mar 10 00:50:07.660097 containerd[1725]: time="2026-03-10T00:50:07.659817390Z" level=info msg="StopPodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\"" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.702 [WARNING][5862] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e643e7c4-0931-432c-9761-f364e4ac4030", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65", Pod:"csi-node-driver-zhks4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali158fcb71383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.702 [INFO][5862] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.702 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" iface="eth0" netns="" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.702 [INFO][5862] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.702 [INFO][5862] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.727 [INFO][5869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.727 [INFO][5869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.727 [INFO][5869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.736 [WARNING][5869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.736 [INFO][5869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.738 [INFO][5869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:07.742620 containerd[1725]: 2026-03-10 00:50:07.740 [INFO][5862] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.743753 containerd[1725]: time="2026-03-10T00:50:07.743702231Z" level=info msg="TearDown network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" successfully" Mar 10 00:50:07.743753 containerd[1725]: time="2026-03-10T00:50:07.743732431Z" level=info msg="StopPodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" returns successfully" Mar 10 00:50:07.744409 containerd[1725]: time="2026-03-10T00:50:07.744210952Z" level=info msg="RemovePodSandbox for \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\"" Mar 10 00:50:07.744409 containerd[1725]: time="2026-03-10T00:50:07.744242152Z" level=info msg="Forcibly stopping sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\"" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.810 [WARNING][5883] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e643e7c4-0931-432c-9761-f364e4ac4030", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65", Pod:"csi-node-driver-zhks4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali158fcb71383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.810 [INFO][5883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.810 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" iface="eth0" netns="" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.810 [INFO][5883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.810 [INFO][5883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.846 [INFO][5890] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.846 [INFO][5890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.846 [INFO][5890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.865 [WARNING][5890] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.865 [INFO][5890] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" HandleID="k8s-pod-network.01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Workload="ci--4081.3.6--n--9b959526b1-k8s-csi--node--driver--zhks4-eth0" Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.867 [INFO][5890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:07.875341 containerd[1725]: 2026-03-10 00:50:07.871 [INFO][5883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4" Mar 10 00:50:07.876658 containerd[1725]: time="2026-03-10T00:50:07.876157543Z" level=info msg="TearDown network for sandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" successfully" Mar 10 00:50:08.529921 containerd[1725]: time="2026-03-10T00:50:08.529869367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:08.530535 containerd[1725]: time="2026-03-10T00:50:08.529947328Z" level=info msg="RemovePodSandbox \"01f2dd492c2a1326d84e01c5e533ec692c6700e03871c67ba1da004f436d49e4\" returns successfully" Mar 10 00:50:08.531028 containerd[1725]: time="2026-03-10T00:50:08.530736169Z" level=info msg="StopPodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\"" Mar 10 00:50:08.568013 containerd[1725]: time="2026-03-10T00:50:08.566606501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:08.570466 containerd[1725]: time="2026-03-10T00:50:08.570428026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 10 00:50:08.576839 containerd[1725]: time="2026-03-10T00:50:08.576772115Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:08.583045 containerd[1725]: time="2026-03-10T00:50:08.582220523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:08.583182 containerd[1725]: time="2026-03-10T00:50:08.583142684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.954089895s" Mar 10 00:50:08.583182 containerd[1725]: time="2026-03-10T00:50:08.583172804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 10 00:50:08.585839 containerd[1725]: time="2026-03-10T00:50:08.585800648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 10 00:50:08.597885 containerd[1725]: time="2026-03-10T00:50:08.597846986Z" level=info msg="CreateContainer within sandbox \"75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 00:50:08.639515 containerd[1725]: time="2026-03-10T00:50:08.639473566Z" level=info msg="CreateContainer within sandbox \"75663025d48a2d220adfffeaca50e6153514cdcfd07cd7a21011125419af54cb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95efb639b12abd5a66d026516832005ad3451eea398e324f4c22485d4e54c4fc\"" Mar 10 00:50:08.640462 containerd[1725]: time="2026-03-10T00:50:08.640382167Z" level=info msg="StartContainer for \"95efb639b12abd5a66d026516832005ad3451eea398e324f4c22485d4e54c4fc\"" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.583 [WARNING][5909] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"8e1f6144-8d73-40ae-a431-d82abb11d87e", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41", Pod:"coredns-7d764666f9-2rhxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4931bd3578b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.584 [INFO][5909] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.584 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" iface="eth0" netns="" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.584 [INFO][5909] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.584 [INFO][5909] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.627 [INFO][5916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.627 [INFO][5916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.627 [INFO][5916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.640 [WARNING][5916] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.640 [INFO][5916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.643 [INFO][5916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:08.648329 containerd[1725]: 2026-03-10 00:50:08.646 [INFO][5909] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.648842 containerd[1725]: time="2026-03-10T00:50:08.648815939Z" level=info msg="TearDown network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" successfully" Mar 10 00:50:08.648926 containerd[1725]: time="2026-03-10T00:50:08.648899699Z" level=info msg="StopPodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" returns successfully" Mar 10 00:50:08.649465 containerd[1725]: time="2026-03-10T00:50:08.649414580Z" level=info msg="RemovePodSandbox for \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\"" Mar 10 00:50:08.649682 containerd[1725]: time="2026-03-10T00:50:08.649666101Z" level=info msg="Forcibly stopping sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\"" Mar 10 00:50:08.716013 systemd[1]: Started cri-containerd-95efb639b12abd5a66d026516832005ad3451eea398e324f4c22485d4e54c4fc.scope - libcontainer container 95efb639b12abd5a66d026516832005ad3451eea398e324f4c22485d4e54c4fc. 
Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.698 [WARNING][5931] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"8e1f6144-8d73-40ae-a431-d82abb11d87e", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"5cbd3d37aca068d06fb890710e82e94574e78413119862ccaa0d4ad7b165ce41", Pod:"coredns-7d764666f9-2rhxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4931bd3578b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.698 [INFO][5931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.698 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" iface="eth0" netns="" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.699 [INFO][5931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.699 [INFO][5931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.732 [INFO][5954] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.732 [INFO][5954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.732 [INFO][5954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.744 [WARNING][5954] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.744 [INFO][5954] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" HandleID="k8s-pod-network.8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--2rhxk-eth0" Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.748 [INFO][5954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:08.757130 containerd[1725]: 2026-03-10 00:50:08.752 [INFO][5931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974" Mar 10 00:50:08.757574 containerd[1725]: time="2026-03-10T00:50:08.757165016Z" level=info msg="TearDown network for sandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" successfully" Mar 10 00:50:08.817800 containerd[1725]: time="2026-03-10T00:50:08.816477662Z" level=info msg="StartContainer for \"95efb639b12abd5a66d026516832005ad3451eea398e324f4c22485d4e54c4fc\" returns successfully" Mar 10 00:50:08.819797 containerd[1725]: time="2026-03-10T00:50:08.819753826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:08.819911 containerd[1725]: time="2026-03-10T00:50:08.819833986Z" level=info msg="RemovePodSandbox \"8eefcb71925cfc65150ef0440475f5d650c869fd7ecc28c001ab7ca68a13e974\" returns successfully" Mar 10 00:50:08.820769 containerd[1725]: time="2026-03-10T00:50:08.820741788Z" level=info msg="StopPodSandbox for \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\"" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.868 [WARNING][5992] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef", Pod:"coredns-7d764666f9-djf48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9c7affcdb1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.869 [INFO][5992] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.869 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" iface="eth0" netns="" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.869 [INFO][5992] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.869 [INFO][5992] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.891 [INFO][6005] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.892 [INFO][6005] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.892 [INFO][6005] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.901 [WARNING][6005] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.901 [INFO][6005] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.902 [INFO][6005] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:08.906251 containerd[1725]: 2026-03-10 00:50:08.904 [INFO][5992] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:08.906708 containerd[1725]: time="2026-03-10T00:50:08.906288951Z" level=info msg="TearDown network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" successfully" Mar 10 00:50:08.906708 containerd[1725]: time="2026-03-10T00:50:08.906313311Z" level=info msg="StopPodSandbox for \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" returns successfully" Mar 10 00:50:08.907293 containerd[1725]: time="2026-03-10T00:50:08.907268393Z" level=info msg="RemovePodSandbox for \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\"" Mar 10 00:50:08.907763 containerd[1725]: time="2026-03-10T00:50:08.907299873Z" level=info msg="Forcibly stopping sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\"" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.961 [WARNING][6019] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"fe1fca07-2fb3-438a-bce9-8b3a59b0fe53", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"f5596dbfab5e26470b0915bcad84e4fb908479da7a165ef617ba062f55447fef", Pod:"coredns-7d764666f9-djf48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid9c7affcdb1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.961 [INFO][6019] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.961 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" iface="eth0" netns="" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.961 [INFO][6019] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.961 [INFO][6019] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.984 [INFO][6026] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.985 [INFO][6026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.985 [INFO][6026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.996 [WARNING][6026] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.996 [INFO][6026] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" HandleID="k8s-pod-network.2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Workload="ci--4081.3.6--n--9b959526b1-k8s-coredns--7d764666f9--djf48-eth0" Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:08.998 [INFO][6026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.004847 containerd[1725]: 2026-03-10 00:50:09.002 [INFO][6019] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411" Mar 10 00:50:09.005888 containerd[1725]: time="2026-03-10T00:50:09.005316575Z" level=info msg="TearDown network for sandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" successfully" Mar 10 00:50:09.028741 containerd[1725]: time="2026-03-10T00:50:09.028695208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:09.029016 containerd[1725]: time="2026-03-10T00:50:09.028947409Z" level=info msg="RemovePodSandbox \"2f5fa2ef9ce55cd2bdb07f7c0b160ba2a3adcd0073d4fb183288978c7fcc6411\" returns successfully" Mar 10 00:50:09.029662 containerd[1725]: time="2026-03-10T00:50:09.029618770Z" level=info msg="StopPodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\"" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.068 [WARNING][6040] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1484e1bd-a157-4269-85f5-50c715d3704d", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f", Pod:"goldmane-9f7667bb8-c5qrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif1e17318e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.069 [INFO][6040] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.069 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" iface="eth0" netns="" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.069 [INFO][6040] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.069 [INFO][6040] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.091 [INFO][6047] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.091 [INFO][6047] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.091 [INFO][6047] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.101 [WARNING][6047] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.101 [INFO][6047] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.102 [INFO][6047] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.106597 containerd[1725]: 2026-03-10 00:50:09.104 [INFO][6040] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.106597 containerd[1725]: time="2026-03-10T00:50:09.106566961Z" level=info msg="TearDown network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" successfully" Mar 10 00:50:09.106597 containerd[1725]: time="2026-03-10T00:50:09.106591961Z" level=info msg="StopPodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" returns successfully" Mar 10 00:50:09.108988 containerd[1725]: time="2026-03-10T00:50:09.108395444Z" level=info msg="RemovePodSandbox for \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\"" Mar 10 00:50:09.108988 containerd[1725]: time="2026-03-10T00:50:09.108433364Z" level=info msg="Forcibly stopping sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\"" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.155 [WARNING][6061] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"1484e1bd-a157-4269-85f5-50c715d3704d", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f", Pod:"goldmane-9f7667bb8-c5qrt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif1e17318e65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.156 [INFO][6061] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.156 [INFO][6061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" iface="eth0" netns="" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.156 [INFO][6061] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.156 [INFO][6061] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.182 [INFO][6068] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.183 [INFO][6068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.183 [INFO][6068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.192 [WARNING][6068] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.192 [INFO][6068] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" HandleID="k8s-pod-network.828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Workload="ci--4081.3.6--n--9b959526b1-k8s-goldmane--9f7667bb8--c5qrt-eth0" Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.194 [INFO][6068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.197433 containerd[1725]: 2026-03-10 00:50:09.195 [INFO][6061] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68" Mar 10 00:50:09.198043 containerd[1725]: time="2026-03-10T00:50:09.197476292Z" level=info msg="TearDown network for sandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" successfully" Mar 10 00:50:09.207931 containerd[1725]: time="2026-03-10T00:50:09.207880627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:09.208126 containerd[1725]: time="2026-03-10T00:50:09.207956867Z" level=info msg="RemovePodSandbox \"828940e117249ebbc76fee4a43d8d37d6496e67d0b2aa42ea044e0ff212b8e68\" returns successfully" Mar 10 00:50:09.208516 containerd[1725]: time="2026-03-10T00:50:09.208493708Z" level=info msg="StopPodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\"" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.244 [WARNING][6082] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.244 [INFO][6082] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.244 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" iface="eth0" netns="" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.244 [INFO][6082] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.244 [INFO][6082] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.272 [INFO][6089] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.272 [INFO][6089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.272 [INFO][6089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.283 [WARNING][6089] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.283 [INFO][6089] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.284 [INFO][6089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.288613 containerd[1725]: 2026-03-10 00:50:09.286 [INFO][6082] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.288613 containerd[1725]: time="2026-03-10T00:50:09.288524144Z" level=info msg="TearDown network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" successfully" Mar 10 00:50:09.288613 containerd[1725]: time="2026-03-10T00:50:09.288548944Z" level=info msg="StopPodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" returns successfully" Mar 10 00:50:09.289287 containerd[1725]: time="2026-03-10T00:50:09.289240305Z" level=info msg="RemovePodSandbox for \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\"" Mar 10 00:50:09.289328 containerd[1725]: time="2026-03-10T00:50:09.289295985Z" level=info msg="Forcibly stopping sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\"" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.331 [WARNING][6104] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" WorkloadEndpoint="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.332 [INFO][6104] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.332 [INFO][6104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" iface="eth0" netns="" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.332 [INFO][6104] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.332 [INFO][6104] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.353 [INFO][6111] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.354 [INFO][6111] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.354 [INFO][6111] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.364 [WARNING][6111] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.364 [INFO][6111] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" HandleID="k8s-pod-network.38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Workload="ci--4081.3.6--n--9b959526b1-k8s-whisker--767bcc99db--5sbjp-eth0" Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.366 [INFO][6111] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.370856 containerd[1725]: 2026-03-10 00:50:09.368 [INFO][6104] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1" Mar 10 00:50:09.370856 containerd[1725]: time="2026-03-10T00:50:09.370042142Z" level=info msg="TearDown network for sandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" successfully" Mar 10 00:50:09.380409 containerd[1725]: time="2026-03-10T00:50:09.380367037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:09.380616 containerd[1725]: time="2026-03-10T00:50:09.380598477Z" level=info msg="RemovePodSandbox \"38a511f4143ca574c802fbf360e7eb282ae1d060993c5f100f3e594eb8e63fd1\" returns successfully" Mar 10 00:50:09.381344 containerd[1725]: time="2026-03-10T00:50:09.381317158Z" level=info msg="StopPodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\"" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.419 [WARNING][6125] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1", Pod:"calico-apiserver-646d9b8f6d-tqwb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali27b4907a866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.419 [INFO][6125] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.419 [INFO][6125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" iface="eth0" netns="" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.419 [INFO][6125] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.419 [INFO][6125] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.440 [INFO][6132] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.440 [INFO][6132] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.440 [INFO][6132] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.450 [WARNING][6132] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.450 [INFO][6132] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.451 [INFO][6132] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.458966 containerd[1725]: 2026-03-10 00:50:09.455 [INFO][6125] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.458966 containerd[1725]: time="2026-03-10T00:50:09.458846750Z" level=info msg="TearDown network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" successfully" Mar 10 00:50:09.458966 containerd[1725]: time="2026-03-10T00:50:09.458874790Z" level=info msg="StopPodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" returns successfully" Mar 10 00:50:09.459572 containerd[1725]: time="2026-03-10T00:50:09.459509791Z" level=info msg="RemovePodSandbox for \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\"" Mar 10 00:50:09.459624 containerd[1725]: time="2026-03-10T00:50:09.459605591Z" level=info msg="Forcibly stopping sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\"" Mar 10 00:50:09.550650 kubelet[3138]: I0310 00:50:09.550567 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-646d9b8f6d-rsxcf" 
podStartSLOduration=34.715120685 podStartE2EDuration="42.550545883s" podCreationTimestamp="2026-03-10 00:49:27 +0000 UTC" firstStartedPulling="2026-03-10 00:50:00.749782849 +0000 UTC m=+53.653557649" lastFinishedPulling="2026-03-10 00:50:08.585208047 +0000 UTC m=+61.488982847" observedRunningTime="2026-03-10 00:50:09.54862828 +0000 UTC m=+62.452403080" watchObservedRunningTime="2026-03-10 00:50:09.550545883 +0000 UTC m=+62.454320683" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.497 [WARNING][6146] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0", GenerateName:"calico-apiserver-646d9b8f6d-", Namespace:"calico-system", SelfLink:"", UID:"7e5ad127-4df9-4c78-b9f6-97b3790dc3ca", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 0, 49, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"646d9b8f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-9b959526b1", ContainerID:"22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1", Pod:"calico-apiserver-646d9b8f6d-tqwb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali27b4907a866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.498 [INFO][6146] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.498 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" iface="eth0" netns="" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.498 [INFO][6146] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.498 [INFO][6146] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.529 [INFO][6154] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.529 [INFO][6154] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.530 [INFO][6154] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.546 [WARNING][6154] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.546 [INFO][6154] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" HandleID="k8s-pod-network.42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Workload="ci--4081.3.6--n--9b959526b1-k8s-calico--apiserver--646d9b8f6d--tqwb4-eth0" Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.548 [INFO][6154] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 00:50:09.557178 containerd[1725]: 2026-03-10 00:50:09.552 [INFO][6146] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb" Mar 10 00:50:09.557178 containerd[1725]: time="2026-03-10T00:50:09.556151811Z" level=info msg="TearDown network for sandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" successfully" Mar 10 00:50:09.583336 containerd[1725]: time="2026-03-10T00:50:09.583287410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 10 00:50:09.583576 containerd[1725]: time="2026-03-10T00:50:09.583546130Z" level=info msg="RemovePodSandbox \"42d8dcaa631ba936b0bef9117354b4943002add567f9a27df096c6915cca25bb\" returns successfully" Mar 10 00:50:10.531651 kubelet[3138]: I0310 00:50:10.531614 3138 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 00:50:10.672425 containerd[1725]: time="2026-03-10T00:50:10.671720583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:10.674765 containerd[1725]: time="2026-03-10T00:50:10.674733467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 10 00:50:10.678792 containerd[1725]: time="2026-03-10T00:50:10.678729553Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:10.684875 containerd[1725]: time="2026-03-10T00:50:10.684814402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:10.685608 containerd[1725]: time="2026-03-10T00:50:10.685572123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.099552354s" Mar 10 00:50:10.685608 containerd[1725]: time="2026-03-10T00:50:10.685607763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 10 00:50:10.687812 
containerd[1725]: time="2026-03-10T00:50:10.687625886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 10 00:50:10.697960 containerd[1725]: time="2026-03-10T00:50:10.697920701Z" level=info msg="CreateContainer within sandbox \"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 10 00:50:10.730351 containerd[1725]: time="2026-03-10T00:50:10.730232307Z" level=info msg="CreateContainer within sandbox \"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"21a514a31d06c0461d06a5efb4f95cdfab2e893f05f27d21ecdfc536adecc2d4\"" Mar 10 00:50:10.732118 containerd[1725]: time="2026-03-10T00:50:10.730730708Z" level=info msg="StartContainer for \"21a514a31d06c0461d06a5efb4f95cdfab2e893f05f27d21ecdfc536adecc2d4\"" Mar 10 00:50:10.784817 systemd[1]: Started cri-containerd-21a514a31d06c0461d06a5efb4f95cdfab2e893f05f27d21ecdfc536adecc2d4.scope - libcontainer container 21a514a31d06c0461d06a5efb4f95cdfab2e893f05f27d21ecdfc536adecc2d4. Mar 10 00:50:10.836089 containerd[1725]: time="2026-03-10T00:50:10.835713620Z" level=info msg="StartContainer for \"21a514a31d06c0461d06a5efb4f95cdfab2e893f05f27d21ecdfc536adecc2d4\" returns successfully" Mar 10 00:50:13.027577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3541482080.mount: Deactivated successfully. 
Mar 10 00:50:13.422673 containerd[1725]: time="2026-03-10T00:50:13.422330864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:13.425512 containerd[1725]: time="2026-03-10T00:50:13.425478109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 10 00:50:13.429403 containerd[1725]: time="2026-03-10T00:50:13.429274994Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:13.434405 containerd[1725]: time="2026-03-10T00:50:13.434028081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:13.435105 containerd[1725]: time="2026-03-10T00:50:13.434795722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.747106076s" Mar 10 00:50:13.435105 containerd[1725]: time="2026-03-10T00:50:13.434831842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 10 00:50:13.436077 containerd[1725]: time="2026-03-10T00:50:13.436044764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 00:50:13.444553 containerd[1725]: time="2026-03-10T00:50:13.444461415Z" level=info msg="CreateContainer within sandbox 
\"dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 10 00:50:13.485198 containerd[1725]: time="2026-03-10T00:50:13.485150153Z" level=info msg="CreateContainer within sandbox \"dd11823fd12d2388298d21f992f07687c1fd730fba501db57d78d625a796ec8f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772\"" Mar 10 00:50:13.486666 containerd[1725]: time="2026-03-10T00:50:13.486133194Z" level=info msg="StartContainer for \"81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772\"" Mar 10 00:50:13.521854 systemd[1]: Started cri-containerd-81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772.scope - libcontainer container 81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772. Mar 10 00:50:13.560363 containerd[1725]: time="2026-03-10T00:50:13.560310259Z" level=info msg="StartContainer for \"81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772\" returns successfully" Mar 10 00:50:13.672523 systemd[1]: run-containerd-runc-k8s.io-81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772-runc.CflU6h.mount: Deactivated successfully. 
Mar 10 00:50:13.888097 containerd[1725]: time="2026-03-10T00:50:13.888042041Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:13.891132 containerd[1725]: time="2026-03-10T00:50:13.891047565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 10 00:50:13.893848 containerd[1725]: time="2026-03-10T00:50:13.893801649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 457.720405ms" Mar 10 00:50:13.893848 containerd[1725]: time="2026-03-10T00:50:13.893847889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 10 00:50:13.895231 containerd[1725]: time="2026-03-10T00:50:13.894950611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 10 00:50:13.903205 containerd[1725]: time="2026-03-10T00:50:13.903019582Z" level=info msg="CreateContainer within sandbox \"22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 00:50:13.948624 containerd[1725]: time="2026-03-10T00:50:13.948455846Z" level=info msg="CreateContainer within sandbox \"22c854388d4c20cdd4794cca4b6e6dc2e08fc81d06fc3b4b18551d7888e82fd1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f31760addd5334c00ef180b84dcd564cb8b0a37a670043f39b40e6adb6533d1a\"" Mar 10 00:50:13.949318 containerd[1725]: time="2026-03-10T00:50:13.949243847Z" level=info msg="StartContainer 
for \"f31760addd5334c00ef180b84dcd564cb8b0a37a670043f39b40e6adb6533d1a\"" Mar 10 00:50:13.984174 systemd[1]: Started cri-containerd-f31760addd5334c00ef180b84dcd564cb8b0a37a670043f39b40e6adb6533d1a.scope - libcontainer container f31760addd5334c00ef180b84dcd564cb8b0a37a670043f39b40e6adb6533d1a. Mar 10 00:50:14.025500 containerd[1725]: time="2026-03-10T00:50:14.025346515Z" level=info msg="StartContainer for \"f31760addd5334c00ef180b84dcd564cb8b0a37a670043f39b40e6adb6533d1a\" returns successfully" Mar 10 00:50:14.577216 kubelet[3138]: I0310 00:50:14.576175 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-c5qrt" podStartSLOduration=37.111789371 podStartE2EDuration="46.576162492s" podCreationTimestamp="2026-03-10 00:49:28 +0000 UTC" firstStartedPulling="2026-03-10 00:50:03.971580562 +0000 UTC m=+56.875355362" lastFinishedPulling="2026-03-10 00:50:13.435953683 +0000 UTC m=+66.339728483" observedRunningTime="2026-03-10 00:50:14.569240962 +0000 UTC m=+67.473015762" watchObservedRunningTime="2026-03-10 00:50:14.576162492 +0000 UTC m=+67.479937292" Mar 10 00:50:15.554354 kubelet[3138]: I0310 00:50:15.554068 3138 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 00:50:16.426474 containerd[1725]: time="2026-03-10T00:50:16.425935621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:16.429415 containerd[1725]: time="2026-03-10T00:50:16.429367906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 10 00:50:16.433203 containerd[1725]: time="2026-03-10T00:50:16.433175831Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:16.437980 containerd[1725]: 
time="2026-03-10T00:50:16.437853078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 00:50:16.438702 containerd[1725]: time="2026-03-10T00:50:16.438450599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.543465828s" Mar 10 00:50:16.438702 containerd[1725]: time="2026-03-10T00:50:16.438485279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 10 00:50:16.448960 containerd[1725]: time="2026-03-10T00:50:16.448916254Z" level=info msg="CreateContainer within sandbox \"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 10 00:50:16.484305 containerd[1725]: time="2026-03-10T00:50:16.484252864Z" level=info msg="CreateContainer within sandbox \"23dd02751d548b80d80f1fceb7b712b1bd760405cbc3d590452aaf85e356eb65\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"99d1fffefa14d6e3a8d7d87da9af30ae8765d4d26b65d2d3e63cbb2d683940b8\"" Mar 10 00:50:16.485026 containerd[1725]: time="2026-03-10T00:50:16.484843344Z" level=info msg="StartContainer for \"99d1fffefa14d6e3a8d7d87da9af30ae8765d4d26b65d2d3e63cbb2d683940b8\"" Mar 10 00:50:16.523825 systemd[1]: Started cri-containerd-99d1fffefa14d6e3a8d7d87da9af30ae8765d4d26b65d2d3e63cbb2d683940b8.scope - libcontainer container 
99d1fffefa14d6e3a8d7d87da9af30ae8765d4d26b65d2d3e63cbb2d683940b8. Mar 10 00:50:16.572846 containerd[1725]: time="2026-03-10T00:50:16.572800468Z" level=info msg="StartContainer for \"99d1fffefa14d6e3a8d7d87da9af30ae8765d4d26b65d2d3e63cbb2d683940b8\" returns successfully" Mar 10 00:50:16.640395 kubelet[3138]: I0310 00:50:16.639679 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-646d9b8f6d-tqwb4" podStartSLOduration=39.722424484 podStartE2EDuration="49.638690401s" podCreationTimestamp="2026-03-10 00:49:27 +0000 UTC" firstStartedPulling="2026-03-10 00:50:03.978384253 +0000 UTC m=+56.882159013" lastFinishedPulling="2026-03-10 00:50:13.89465013 +0000 UTC m=+66.798424930" observedRunningTime="2026-03-10 00:50:14.616037908 +0000 UTC m=+67.519812708" watchObservedRunningTime="2026-03-10 00:50:16.638690401 +0000 UTC m=+69.542465201" Mar 10 00:50:17.313245 kubelet[3138]: I0310 00:50:17.312920 3138 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 10 00:50:17.313245 kubelet[3138]: I0310 00:50:17.312955 3138 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 10 00:50:17.585095 kubelet[3138]: I0310 00:50:17.584799 3138 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-zhks4" podStartSLOduration=35.068171444 podStartE2EDuration="48.584779936s" podCreationTimestamp="2026-03-10 00:49:29 +0000 UTC" firstStartedPulling="2026-03-10 00:50:02.923061069 +0000 UTC m=+55.826835829" lastFinishedPulling="2026-03-10 00:50:16.439669521 +0000 UTC m=+69.343444321" observedRunningTime="2026-03-10 00:50:17.583904135 +0000 UTC m=+70.487678935" watchObservedRunningTime="2026-03-10 00:50:17.584779936 +0000 UTC m=+70.488554736" Mar 10 00:50:21.982941 systemd[1]: Started 
sshd@7-10.200.20.10:22-10.200.16.10:46540.service - OpenSSH per-connection server daemon (10.200.16.10:46540). Mar 10 00:50:22.472330 sshd[6450]: Accepted publickey for core from 10.200.16.10 port 46540 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:50:22.474553 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:50:22.480981 systemd-logind[1694]: New session 10 of user core. Mar 10 00:50:22.490067 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 00:50:22.893870 sshd[6450]: pam_unix(sshd:session): session closed for user core Mar 10 00:50:22.897572 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit. Mar 10 00:50:22.897693 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:46540.service: Deactivated successfully. Mar 10 00:50:22.899671 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 00:50:22.901818 systemd-logind[1694]: Removed session 10. Mar 10 00:50:27.981377 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:46546.service - OpenSSH per-connection server daemon (10.200.16.10:46546). Mar 10 00:50:28.471289 sshd[6470]: Accepted publickey for core from 10.200.16.10 port 46546 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:50:28.472769 sshd[6470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:50:28.476837 systemd-logind[1694]: New session 11 of user core. Mar 10 00:50:28.482785 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 00:50:28.904495 sshd[6470]: pam_unix(sshd:session): session closed for user core Mar 10 00:50:28.908098 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:46546.service: Deactivated successfully. Mar 10 00:50:28.909940 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 00:50:28.910875 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit. 
Mar 10 00:50:28.911912 systemd-logind[1694]: Removed session 11. Mar 10 00:50:31.348265 kubelet[3138]: I0310 00:50:31.347859 3138 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 00:50:33.992588 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:47670.service - OpenSSH per-connection server daemon (10.200.16.10:47670). Mar 10 00:50:34.486578 sshd[6506]: Accepted publickey for core from 10.200.16.10 port 47670 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:50:34.487862 sshd[6506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:50:34.492885 systemd-logind[1694]: New session 12 of user core. Mar 10 00:50:34.499815 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 00:50:34.900908 sshd[6506]: pam_unix(sshd:session): session closed for user core Mar 10 00:50:34.904973 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:47670.service: Deactivated successfully. Mar 10 00:50:34.907408 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 00:50:34.908838 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit. Mar 10 00:50:34.910430 systemd-logind[1694]: Removed session 12. Mar 10 00:50:35.523710 systemd[1]: run-containerd-runc-k8s.io-cffe5dabf139ae1a2f2ba5dce41ab07cba9577ec6a09263c603ed88216657e8f-runc.1GgqLZ.mount: Deactivated successfully. Mar 10 00:50:39.995004 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:39398.service - OpenSSH per-connection server daemon (10.200.16.10:39398). Mar 10 00:50:40.480172 sshd[6539]: Accepted publickey for core from 10.200.16.10 port 39398 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8 Mar 10 00:50:40.481994 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 00:50:40.485587 systemd-logind[1694]: New session 13 of user core. Mar 10 00:50:40.490808 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 10 00:50:40.895295 sshd[6539]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:40.899037 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:39398.service: Deactivated successfully.
Mar 10 00:50:40.900759 systemd[1]: session-13.scope: Deactivated successfully.
Mar 10 00:50:40.903129 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit.
Mar 10 00:50:40.904413 systemd-logind[1694]: Removed session 13.
Mar 10 00:50:41.718059 kubelet[3138]: I0310 00:50:41.717687 3138 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Mar 10 00:50:45.994946 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:39408.service - OpenSSH per-connection server daemon (10.200.16.10:39408).
Mar 10 00:50:46.483079 sshd[6578]: Accepted publickey for core from 10.200.16.10 port 39408 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:46.485037 sshd[6578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:46.489099 systemd-logind[1694]: New session 14 of user core.
Mar 10 00:50:46.493903 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 10 00:50:46.904264 sshd[6578]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:46.909449 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:39408.service: Deactivated successfully.
Mar 10 00:50:46.913225 systemd[1]: session-14.scope: Deactivated successfully.
Mar 10 00:50:46.915359 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit.
Mar 10 00:50:46.916886 systemd-logind[1694]: Removed session 14.
Mar 10 00:50:46.996000 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:39422.service - OpenSSH per-connection server daemon (10.200.16.10:39422).
Mar 10 00:50:47.482363 sshd[6613]: Accepted publickey for core from 10.200.16.10 port 39422 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:47.483906 sshd[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:47.487844 systemd-logind[1694]: New session 15 of user core.
Mar 10 00:50:47.494799 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 10 00:50:47.925551 sshd[6613]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:47.931428 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:39422.service: Deactivated successfully.
Mar 10 00:50:47.935890 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 00:50:47.938073 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit.
Mar 10 00:50:47.939327 systemd-logind[1694]: Removed session 15.
Mar 10 00:50:48.017827 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:39430.service - OpenSSH per-connection server daemon (10.200.16.10:39430).
Mar 10 00:50:48.509108 sshd[6624]: Accepted publickey for core from 10.200.16.10 port 39430 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:48.510047 sshd[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:48.514011 systemd-logind[1694]: New session 16 of user core.
Mar 10 00:50:48.523793 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 00:50:48.921842 sshd[6624]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:48.926092 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:39430.service: Deactivated successfully.
Mar 10 00:50:48.928382 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 00:50:48.929413 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit.
Mar 10 00:50:48.931252 systemd-logind[1694]: Removed session 16.
Mar 10 00:50:54.010426 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:37578.service - OpenSSH per-connection server daemon (10.200.16.10:37578).
Mar 10 00:50:54.507615 sshd[6659]: Accepted publickey for core from 10.200.16.10 port 37578 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:54.509734 sshd[6659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:54.514691 systemd-logind[1694]: New session 17 of user core.
Mar 10 00:50:54.518779 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 00:50:54.927907 sshd[6659]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:54.933289 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit.
Mar 10 00:50:54.933856 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:37578.service: Deactivated successfully.
Mar 10 00:50:54.935748 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 00:50:54.938711 systemd-logind[1694]: Removed session 17.
Mar 10 00:50:55.030365 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:37594.service - OpenSSH per-connection server daemon (10.200.16.10:37594).
Mar 10 00:50:55.512985 sshd[6672]: Accepted publickey for core from 10.200.16.10 port 37594 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:55.513827 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:55.517475 systemd-logind[1694]: New session 18 of user core.
Mar 10 00:50:55.522785 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 00:50:56.082341 sshd[6672]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:56.086187 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:37594.service: Deactivated successfully.
Mar 10 00:50:56.089985 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 00:50:56.091227 systemd-logind[1694]: Session 18 logged out. Waiting for processes to exit.
Mar 10 00:50:56.092462 systemd-logind[1694]: Removed session 18.
Mar 10 00:50:56.177205 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:37598.service - OpenSSH per-connection server daemon (10.200.16.10:37598).
Mar 10 00:50:56.669253 sshd[6683]: Accepted publickey for core from 10.200.16.10 port 37598 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:56.670797 sshd[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:56.675894 systemd-logind[1694]: New session 19 of user core.
Mar 10 00:50:56.681773 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 00:50:57.859760 sshd[6683]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:57.865013 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:37598.service: Deactivated successfully.
Mar 10 00:50:57.865242 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit.
Mar 10 00:50:57.869684 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 00:50:57.871112 systemd-logind[1694]: Removed session 19.
Mar 10 00:50:57.944513 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:37604.service - OpenSSH per-connection server daemon (10.200.16.10:37604).
Mar 10 00:50:58.437575 sshd[6735]: Accepted publickey for core from 10.200.16.10 port 37604 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:58.439084 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:58.443705 systemd-logind[1694]: New session 20 of user core.
Mar 10 00:50:58.447769 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 00:50:58.957425 sshd[6735]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:58.961026 systemd-logind[1694]: Session 20 logged out. Waiting for processes to exit.
Mar 10 00:50:58.962594 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:37604.service: Deactivated successfully.
Mar 10 00:50:58.964799 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 00:50:58.966215 systemd-logind[1694]: Removed session 20.
Mar 10 00:50:59.052924 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:37612.service - OpenSSH per-connection server daemon (10.200.16.10:37612).
Mar 10 00:50:59.535661 sshd[6748]: Accepted publickey for core from 10.200.16.10 port 37612 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:50:59.536712 sshd[6748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:50:59.541688 systemd-logind[1694]: New session 21 of user core.
Mar 10 00:50:59.550807 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 00:50:59.956245 sshd[6748]: pam_unix(sshd:session): session closed for user core
Mar 10 00:50:59.959556 systemd-logind[1694]: Session 21 logged out. Waiting for processes to exit.
Mar 10 00:50:59.959791 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:37612.service: Deactivated successfully.
Mar 10 00:50:59.961427 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 00:50:59.963516 systemd-logind[1694]: Removed session 21.
Mar 10 00:51:01.061414 systemd[1]: run-containerd-runc-k8s.io-81f88f14cb6c1d55016eac0189d2f0c1493b2b5e85e628a49e93965c8f3a0772-runc.8UFLaG.mount: Deactivated successfully.
Mar 10 00:51:05.051921 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:60234.service - OpenSSH per-connection server daemon (10.200.16.10:60234).
Mar 10 00:51:05.534918 sshd[6784]: Accepted publickey for core from 10.200.16.10 port 60234 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:51:05.554304 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:51:05.558287 systemd-logind[1694]: New session 22 of user core.
Mar 10 00:51:05.564793 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 00:51:05.956556 sshd[6784]: pam_unix(sshd:session): session closed for user core
Mar 10 00:51:05.959793 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:60234.service: Deactivated successfully.
Mar 10 00:51:05.963407 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 00:51:05.964196 systemd-logind[1694]: Session 22 logged out. Waiting for processes to exit.
Mar 10 00:51:05.965495 systemd-logind[1694]: Removed session 22.
Mar 10 00:51:11.063169 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:42582.service - OpenSSH per-connection server daemon (10.200.16.10:42582).
Mar 10 00:51:11.551663 sshd[6819]: Accepted publickey for core from 10.200.16.10 port 42582 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:51:11.552730 sshd[6819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:51:11.556780 systemd-logind[1694]: New session 23 of user core.
Mar 10 00:51:11.562859 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 00:51:11.965556 sshd[6819]: pam_unix(sshd:session): session closed for user core
Mar 10 00:51:11.969233 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:42582.service: Deactivated successfully.
Mar 10 00:51:11.970902 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 00:51:11.971653 systemd-logind[1694]: Session 23 logged out. Waiting for processes to exit.
Mar 10 00:51:11.972836 systemd-logind[1694]: Removed session 23.
Mar 10 00:51:17.053484 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:42584.service - OpenSSH per-connection server daemon (10.200.16.10:42584).
Mar 10 00:51:17.546356 sshd[6863]: Accepted publickey for core from 10.200.16.10 port 42584 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:51:17.547832 sshd[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:51:17.552061 systemd-logind[1694]: New session 24 of user core.
Mar 10 00:51:17.556799 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 00:51:17.959869 sshd[6863]: pam_unix(sshd:session): session closed for user core
Mar 10 00:51:17.963382 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:42584.service: Deactivated successfully.
Mar 10 00:51:17.965845 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 00:51:17.966682 systemd-logind[1694]: Session 24 logged out. Waiting for processes to exit.
Mar 10 00:51:17.967981 systemd-logind[1694]: Removed session 24.
Mar 10 00:51:23.052440 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:39730.service - OpenSSH per-connection server daemon (10.200.16.10:39730).
Mar 10 00:51:23.543663 sshd[6897]: Accepted publickey for core from 10.200.16.10 port 39730 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:51:23.544559 sshd[6897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:51:23.548970 systemd-logind[1694]: New session 25 of user core.
Mar 10 00:51:23.556832 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 10 00:51:23.961224 sshd[6897]: pam_unix(sshd:session): session closed for user core
Mar 10 00:51:23.965397 systemd-logind[1694]: Session 25 logged out. Waiting for processes to exit.
Mar 10 00:51:23.966560 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:39730.service: Deactivated successfully.
Mar 10 00:51:23.970705 systemd[1]: session-25.scope: Deactivated successfully.
Mar 10 00:51:23.972444 systemd-logind[1694]: Removed session 25.
Mar 10 00:51:29.054962 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:39736.service - OpenSSH per-connection server daemon (10.200.16.10:39736).
Mar 10 00:51:29.542009 sshd[6939]: Accepted publickey for core from 10.200.16.10 port 39736 ssh2: RSA SHA256:3yE2WPMdb18Yso8Q40oiwC6Ssaxyw01YkZxRo55vnO8
Mar 10 00:51:29.564427 sshd[6939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 00:51:29.569586 systemd-logind[1694]: New session 26 of user core.
Mar 10 00:51:29.579803 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 10 00:51:29.954871 sshd[6939]: pam_unix(sshd:session): session closed for user core
Mar 10 00:51:29.958839 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:39736.service: Deactivated successfully.
Mar 10 00:51:29.961831 systemd[1]: session-26.scope: Deactivated successfully.
Mar 10 00:51:29.962579 systemd-logind[1694]: Session 26 logged out. Waiting for processes to exit.
Mar 10 00:51:29.963384 systemd-logind[1694]: Removed session 26.