Mar 2 13:10:07.166452 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 2 13:10:07.166474 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Mar 2 11:11:01 -00 2026
Mar 2 13:10:07.166482 kernel: KASLR enabled
Mar 2 13:10:07.166488 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 2 13:10:07.166495 kernel: printk: bootconsole [pl11] enabled
Mar 2 13:10:07.166501 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:10:07.166508 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 2 13:10:07.166515 kernel: random: crng init done
Mar 2 13:10:07.166521 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:10:07.166527 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 2 13:10:07.166533 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166539 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166546 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 2 13:10:07.166553 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166560 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166567 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166573 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166581 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166588 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166594 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 2 13:10:07.166601 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:10:07.166607 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 2 13:10:07.166614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 2 13:10:07.166620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 2 13:10:07.166627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 2 13:10:07.166633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 2 13:10:07.166640 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 2 13:10:07.166646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 2 13:10:07.166654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 2 13:10:07.166661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 2 13:10:07.166667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 2 13:10:07.166673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 2 13:10:07.166680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 2 13:10:07.166686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 2 13:10:07.166693 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 2 13:10:07.166699 kernel: Zone ranges:
Mar 2 13:10:07.166705 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 2 13:10:07.166712 kernel: DMA32 empty
Mar 2 13:10:07.166718 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:10:07.166724 kernel: Movable zone start for each node
Mar 2 13:10:07.166735 kernel: Early memory node ranges
Mar 2 13:10:07.166742 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 2 13:10:07.166749 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 2 13:10:07.166756 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 2 13:10:07.166762 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 2 13:10:07.166770 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 2 13:10:07.166777 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 2 13:10:07.166784 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:10:07.166791 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 2 13:10:07.166798 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 2 13:10:07.166805 kernel: psci: probing for conduit method from ACPI.
Mar 2 13:10:07.166811 kernel: psci: PSCIv1.1 detected in firmware.
Mar 2 13:10:07.166818 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 2 13:10:07.166825 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 2 13:10:07.166832 kernel: psci: SMC Calling Convention v1.4
Mar 2 13:10:07.166838 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 2 13:10:07.166845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 2 13:10:07.166853 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 2 13:10:07.166860 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 2 13:10:07.166867 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 2 13:10:07.166874 kernel: Detected PIPT I-cache on CPU0
Mar 2 13:10:07.166881 kernel: CPU features: detected: GIC system register CPU interface
Mar 2 13:10:07.166888 kernel: CPU features: detected: Hardware dirty bit management
Mar 2 13:10:07.166894 kernel: CPU features: detected: Spectre-BHB
Mar 2 13:10:07.166901 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 2 13:10:07.166908 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 2 13:10:07.166915 kernel: CPU features: detected: ARM erratum 1418040
Mar 2 13:10:07.166922 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 2 13:10:07.166930 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 2 13:10:07.166937 kernel: alternatives: applying boot alternatives
Mar 2 13:10:07.166945 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:10:07.166952 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:10:07.166959 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:10:07.166966 kernel: Fallback order for Node 0: 0
Mar 2 13:10:07.166973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 2 13:10:07.166980 kernel: Policy zone: Normal
Mar 2 13:10:07.166986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:10:07.166993 kernel: software IO TLB: area num 2.
Mar 2 13:10:07.167000 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 2 13:10:07.167009 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 2 13:10:07.167016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 2 13:10:07.167022 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:10:07.167030 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:10:07.167037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 2 13:10:07.167044 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:10:07.167051 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:10:07.167058 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:10:07.167065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 2 13:10:07.167071 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 2 13:10:07.167078 kernel: GICv3: 960 SPIs implemented
Mar 2 13:10:07.167086 kernel: GICv3: 0 Extended SPIs implemented
Mar 2 13:10:07.168147 kernel: Root IRQ handler: gic_handle_irq
Mar 2 13:10:07.168163 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 2 13:10:07.168171 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 2 13:10:07.168178 kernel: ITS: No ITS available, not enabling LPIs
Mar 2 13:10:07.168185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:10:07.168193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:10:07.168200 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 2 13:10:07.168207 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 2 13:10:07.168215 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 2 13:10:07.168222 kernel: Console: colour dummy device 80x25
Mar 2 13:10:07.168234 kernel: printk: console [tty1] enabled
Mar 2 13:10:07.168243 kernel: ACPI: Core revision 20230628
Mar 2 13:10:07.168250 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 2 13:10:07.168257 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:10:07.168264 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:10:07.168272 kernel: landlock: Up and running.
Mar 2 13:10:07.168279 kernel: SELinux: Initializing.
Mar 2 13:10:07.168286 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:10:07.168293 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:10:07.168302 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:10:07.168309 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:10:07.168316 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 2 13:10:07.168323 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 2 13:10:07.168330 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 2 13:10:07.168337 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:10:07.168344 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:10:07.168352 kernel: Remapping and enabling EFI services.
Mar 2 13:10:07.168365 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:10:07.168372 kernel: Detected PIPT I-cache on CPU1
Mar 2 13:10:07.168380 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 2 13:10:07.168387 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:10:07.168396 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 2 13:10:07.168403 kernel: smp: Brought up 1 node, 2 CPUs
Mar 2 13:10:07.168411 kernel: SMP: Total of 2 processors activated.
Mar 2 13:10:07.168418 kernel: CPU features: detected: 32-bit EL0 Support
Mar 2 13:10:07.168426 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 2 13:10:07.168435 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 2 13:10:07.168442 kernel: CPU features: detected: CRC32 instructions
Mar 2 13:10:07.168450 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 2 13:10:07.168457 kernel: CPU features: detected: LSE atomic instructions
Mar 2 13:10:07.168464 kernel: CPU features: detected: Privileged Access Never
Mar 2 13:10:07.168472 kernel: CPU: All CPU(s) started at EL1
Mar 2 13:10:07.168479 kernel: alternatives: applying system-wide alternatives
Mar 2 13:10:07.168487 kernel: devtmpfs: initialized
Mar 2 13:10:07.168494 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:10:07.168503 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 2 13:10:07.168510 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:10:07.168518 kernel: SMBIOS 3.1.0 present.
Mar 2 13:10:07.168525 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 2 13:10:07.168533 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:10:07.168540 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 2 13:10:07.168547 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 2 13:10:07.168555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 2 13:10:07.168562 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:10:07.168571 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 2 13:10:07.168579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:10:07.168586 kernel: cpuidle: using governor menu
Mar 2 13:10:07.168593 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 2 13:10:07.168601 kernel: ASID allocator initialised with 32768 entries
Mar 2 13:10:07.168608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:10:07.168615 kernel: Serial: AMBA PL011 UART driver
Mar 2 13:10:07.168623 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 2 13:10:07.168630 kernel: Modules: 0 pages in range for non-PLT usage
Mar 2 13:10:07.168639 kernel: Modules: 509008 pages in range for PLT usage
Mar 2 13:10:07.168646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:10:07.168654 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:10:07.168661 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 2 13:10:07.168668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 2 13:10:07.168676 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:10:07.168683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:10:07.168691 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 2 13:10:07.168698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 2 13:10:07.168707 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:10:07.168714 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:10:07.168721 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:10:07.168729 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:10:07.168736 kernel: ACPI: Interpreter enabled
Mar 2 13:10:07.168743 kernel: ACPI: Using GIC for interrupt routing
Mar 2 13:10:07.168750 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 2 13:10:07.168758 kernel: printk: console [ttyAMA0] enabled
Mar 2 13:10:07.168765 kernel: printk: bootconsole [pl11] disabled
Mar 2 13:10:07.168774 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 2 13:10:07.168781 kernel: iommu: Default domain type: Translated
Mar 2 13:10:07.168789 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 2 13:10:07.168796 kernel: efivars: Registered efivars operations
Mar 2 13:10:07.168803 kernel: vgaarb: loaded
Mar 2 13:10:07.168811 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 2 13:10:07.168818 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:10:07.168826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:10:07.168833 kernel: pnp: PnP ACPI init
Mar 2 13:10:07.168842 kernel: pnp: PnP ACPI: found 0 devices
Mar 2 13:10:07.168849 kernel: NET: Registered PF_INET protocol family
Mar 2 13:10:07.168857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:10:07.168864 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:10:07.168872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:10:07.168879 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:10:07.168887 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:10:07.168894 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:10:07.168902 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:10:07.168911 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:10:07.168918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:10:07.168925 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:10:07.168933 kernel: kvm [1]: HYP mode not available
Mar 2 13:10:07.168940 kernel: Initialise system trusted keyrings
Mar 2 13:10:07.168947 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:10:07.168955 kernel: Key type asymmetric registered
Mar 2 13:10:07.168962 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:10:07.168969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 13:10:07.168978 kernel: io scheduler mq-deadline registered
Mar 2 13:10:07.168985 kernel: io scheduler kyber registered
Mar 2 13:10:07.168993 kernel: io scheduler bfq registered
Mar 2 13:10:07.169000 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:10:07.169007 kernel: thunder_xcv, ver 1.0
Mar 2 13:10:07.169014 kernel: thunder_bgx, ver 1.0
Mar 2 13:10:07.169021 kernel: nicpf, ver 1.0
Mar 2 13:10:07.169029 kernel: nicvf, ver 1.0
Mar 2 13:10:07.169165 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 2 13:10:07.169243 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-02T13:10:06 UTC (1772457006)
Mar 2 13:10:07.169254 kernel: efifb: probing for efifb
Mar 2 13:10:07.169262 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 2 13:10:07.169269 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 2 13:10:07.169277 kernel: efifb: scrolling: redraw
Mar 2 13:10:07.169284 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 13:10:07.169291 kernel: Console: switching to colour frame buffer device 128x48
Mar 2 13:10:07.169299 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:10:07.169308 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 2 13:10:07.169316 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 2 13:10:07.169323 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 2 13:10:07.169331 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 2 13:10:07.169338 kernel: watchdog: Hard watchdog permanently disabled
Mar 2 13:10:07.169346 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:10:07.169353 kernel: Segment Routing with IPv6
Mar 2 13:10:07.169360 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:10:07.169368 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:10:07.169376 kernel: Key type dns_resolver registered
Mar 2 13:10:07.169384 kernel: registered taskstats version 1
Mar 2 13:10:07.169391 kernel: Loading compiled-in X.509 certificates
Mar 2 13:10:07.169399 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 888055ac257926b028c9aac8084c1e2b1bcee773'
Mar 2 13:10:07.169406 kernel: Key type .fscrypt registered
Mar 2 13:10:07.169413 kernel: Key type fscrypt-provisioning registered
Mar 2 13:10:07.169420 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:10:07.169428 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:10:07.169435 kernel: ima: No architecture policies found
Mar 2 13:10:07.169444 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 2 13:10:07.169451 kernel: clk: Disabling unused clocks
Mar 2 13:10:07.169459 kernel: Freeing unused kernel memory: 39424K
Mar 2 13:10:07.169466 kernel: Run /init as init process
Mar 2 13:10:07.169473 kernel: with arguments:
Mar 2 13:10:07.169480 kernel: /init
Mar 2 13:10:07.169487 kernel: with environment:
Mar 2 13:10:07.169494 kernel: HOME=/
Mar 2 13:10:07.169502 kernel: TERM=linux
Mar 2 13:10:07.169511 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:10:07.169522 systemd[1]: Detected virtualization microsoft.
Mar 2 13:10:07.169530 systemd[1]: Detected architecture arm64.
Mar 2 13:10:07.169537 systemd[1]: Running in initrd.
Mar 2 13:10:07.169545 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:10:07.169552 systemd[1]: Hostname set to .
Mar 2 13:10:07.169561 systemd[1]: Initializing machine ID from random generator.
Mar 2 13:10:07.169570 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:10:07.169578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:10:07.169586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:10:07.169594 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:10:07.169602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:10:07.169610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:10:07.169618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:10:07.169627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:10:07.169637 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:10:07.169645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:10:07.169653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:10:07.169661 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:10:07.169669 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:10:07.169677 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:10:07.169685 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:10:07.169693 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:10:07.169703 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:10:07.169711 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:10:07.169719 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:10:07.169726 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:10:07.169734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:10:07.169742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:10:07.169750 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:10:07.169758 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:10:07.169768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:10:07.169776 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:10:07.169784 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:10:07.169791 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:10:07.169814 systemd-journald[217]: Collecting audit messages is disabled.
Mar 2 13:10:07.169835 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:10:07.169843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:07.169852 systemd-journald[217]: Journal started
Mar 2 13:10:07.169870 systemd-journald[217]: Runtime Journal (/run/log/journal/bd17a39deaaa4ffd840712943f0174ee) is 8.0M, max 78.5M, 70.5M free.
Mar 2 13:10:07.175724 systemd-modules-load[218]: Inserted module 'overlay'
Mar 2 13:10:07.195610 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:10:07.196106 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:10:07.220249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:10:07.220269 kernel: Bridge firewalling registered
Mar 2 13:10:07.213357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:10:07.216982 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 2 13:10:07.226206 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:10:07.234332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:10:07.242495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:07.260362 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:10:07.266230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:10:07.279491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:10:07.304300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:10:07.314568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:10:07.320137 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:10:07.337111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:10:07.346784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:10:07.367369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:10:07.379263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:10:07.387235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:10:07.404493 dracut-cmdline[253]: dracut-dracut-053
Mar 2 13:10:07.404493 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:10:07.443672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:10:07.449127 systemd-resolved[259]: Positive Trust Anchors:
Mar 2 13:10:07.449137 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:10:07.449169 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:10:07.451396 systemd-resolved[259]: Defaulting to hostname 'linux'.
Mar 2 13:10:07.453383 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:10:07.458015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:10:07.523103 kernel: SCSI subsystem initialized
Mar 2 13:10:07.530108 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:10:07.540120 kernel: iscsi: registered transport (tcp)
Mar 2 13:10:07.555708 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:10:07.555764 kernel: QLogic iSCSI HBA Driver
Mar 2 13:10:07.588739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:10:07.599465 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:10:07.625389 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:10:07.625437 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:10:07.630473 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:10:07.677122 kernel: raid6: neonx8 gen() 15795 MB/s
Mar 2 13:10:07.696106 kernel: raid6: neonx4 gen() 15687 MB/s
Mar 2 13:10:07.715104 kernel: raid6: neonx2 gen() 13224 MB/s
Mar 2 13:10:07.735105 kernel: raid6: neonx1 gen() 10488 MB/s
Mar 2 13:10:07.754103 kernel: raid6: int64x8 gen() 6990 MB/s
Mar 2 13:10:07.773102 kernel: raid6: int64x4 gen() 7363 MB/s
Mar 2 13:10:07.793113 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 2 13:10:07.814706 kernel: raid6: int64x1 gen() 5071 MB/s
Mar 2 13:10:07.814738 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Mar 2 13:10:07.836790 kernel: raid6: .... xor() 12035 MB/s, rmw enabled
Mar 2 13:10:07.836835 kernel: raid6: using neon recovery algorithm
Mar 2 13:10:07.846423 kernel: xor: measuring software checksum speed
Mar 2 13:10:07.846443 kernel: 8regs : 19812 MB/sec
Mar 2 13:10:07.849207 kernel: 32regs : 19655 MB/sec
Mar 2 13:10:07.852663 kernel: arm64_neon : 27052 MB/sec
Mar 2 13:10:07.855768 kernel: xor: using function: arm64_neon (27052 MB/sec)
Mar 2 13:10:07.905107 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:10:07.916216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:10:07.929219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:10:07.949078 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Mar 2 13:10:07.953419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:10:07.970323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:10:07.985629 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Mar 2 13:10:08.010926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:10:08.023357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:10:08.061802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:10:08.081272 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:10:08.101122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:10:08.111727 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:10:08.127188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:10:08.140250 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:10:08.162416 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:10:08.176107 kernel: hv_vmbus: Vmbus version:5.3
Mar 2 13:10:08.176551 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:10:08.197748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:10:08.205798 kernel: hv_vmbus: registering driver hid_hyperv
Mar 2 13:10:08.205820 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 2 13:10:08.205830 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 2 13:10:08.197902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:10:08.255058 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 2 13:10:08.255087 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 2 13:10:08.255147 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 2 13:10:08.255167 kernel: hv_vmbus: registering driver hv_netvsc
Mar 2 13:10:08.255177 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 2 13:10:08.255326 kernel: PTP clock support registered
Mar 2 13:10:08.241497 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:10:08.277662 kernel: hv_vmbus: registering driver hv_storvsc
Mar 2 13:10:08.277684 kernel: scsi host1: storvsc_host_t
Mar 2 13:10:08.277724 kernel: scsi host0: storvsc_host_t
Mar 2 13:10:08.265221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:10:08.286920 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 2 13:10:08.265435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:08.292235 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:08.312117 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 2 13:10:08.317396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:08.082778 kernel: hv_utils: Registering HyperV Utility Driver Mar 2 13:10:08.087814 kernel: hv_vmbus: registering driver hv_utils Mar 2 13:10:08.087830 kernel: hv_utils: Shutdown IC version 3.2 Mar 2 13:10:08.087838 kernel: hv_utils: Heartbeat IC version 3.0 Mar 2 13:10:08.087848 kernel: hv_utils: TimeSync IC version 4.0 Mar 2 13:10:08.087856 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 2 13:10:08.087998 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:10:08.088007 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 2 13:10:08.088096 systemd-journald[217]: Time jumped backwards, rotating. Mar 2 13:10:08.088146 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: VF slot 1 added Mar 2 13:10:08.055269 systemd-resolved[259]: Clock change detected. Flushing caches. Mar 2 13:10:08.096368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:10:08.096469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:10:08.128916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:10:08.129093 kernel: hv_vmbus: registering driver hv_pci Mar 2 13:10:08.130248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:10:08.150607 kernel: hv_pci 67ca448e-b0d4-4688-8eb5-dd23c23759d7: PCI VMBus probing: Using version 0x10004 Mar 2 13:10:08.145707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 2 13:10:08.172220 kernel: hv_pci 67ca448e-b0d4-4688-8eb5-dd23c23759d7: PCI host bridge to bus b0d4:00 Mar 2 13:10:08.172390 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 2 13:10:08.172506 kernel: pci_bus b0d4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 2 13:10:08.172602 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 2 13:10:08.172697 kernel: pci_bus b0d4:00: No busn resource found for root bus, will use [bus 00-ff] Mar 2 13:10:08.179274 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 2 13:10:08.179409 kernel: pci b0d4:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 2 13:10:08.179558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:10:08.217270 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 2 13:10:08.217456 kernel: pci b0d4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 2 13:10:08.217572 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 2 13:10:08.217661 kernel: pci b0d4:00:02.0: enabling Extended Tags Mar 2 13:10:08.217746 kernel: pci b0d4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b0d4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 2 13:10:08.217831 kernel: pci_bus b0d4:00: busn_res: [bus 00-ff] end is updated to 00 Mar 2 13:10:08.223772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:10:08.223795 kernel: pci b0d4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 2 13:10:08.231979 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 2 13:10:08.260174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#296 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:10:08.260598 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 2 13:10:08.290881 kernel: mlx5_core b0d4:00:02.0: enabling device (0000 -> 0002) Mar 2 13:10:08.296173 kernel: mlx5_core b0d4:00:02.0: firmware version: 16.30.5026 Mar 2 13:10:08.493181 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: VF registering: eth1 Mar 2 13:10:08.493359 kernel: mlx5_core b0d4:00:02.0 eth1: joined to eth0 Mar 2 13:10:08.501452 kernel: mlx5_core b0d4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 2 13:10:08.512183 kernel: mlx5_core b0d4:00:02.0 enP45268s1: renamed from eth1 Mar 2 13:10:08.776193 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (502) Mar 2 13:10:08.790257 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 2 13:10:08.805533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 2 13:10:08.836629 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 2 13:10:08.882187 kernel: BTRFS: device fsid 0d0ab669-47ba-4267-b368-82e952673c8e devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (506) Mar 2 13:10:08.895086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 2 13:10:08.900521 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 2 13:10:08.927335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:10:08.948193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:10:08.960176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:10:08.970175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:10:09.971222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:10:09.971622 disk-uuid[607]: The operation has completed successfully. Mar 2 13:10:10.032215 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 2 13:10:10.034182 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:10:10.068282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:10:10.078188 sh[720]: Success Mar 2 13:10:10.106178 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 2 13:10:10.441355 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:10:10.459283 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:10:10.466985 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 2 13:10:10.496807 kernel: BTRFS info (device dm-0): first mount of filesystem 0d0ab669-47ba-4267-b368-82e952673c8e Mar 2 13:10:10.496845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:10:10.501953 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 13:10:10.505761 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 13:10:10.508999 kernel: BTRFS info (device dm-0): using free space tree Mar 2 13:10:10.873446 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:10:10.881204 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:10:10.899360 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:10:10.906337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 2 13:10:10.940488 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:10:10.940528 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:10:10.943957 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:10:10.982189 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:10:10.991442 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 13:10:11.000748 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:10:11.001891 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:10:11.022331 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:10:11.031572 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:10:11.041370 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:10:11.059013 systemd-networkd[906]: lo: Link UP Mar 2 13:10:11.059021 systemd-networkd[906]: lo: Gained carrier Mar 2 13:10:11.063268 systemd-networkd[906]: Enumeration completed Mar 2 13:10:11.063420 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:10:11.068569 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:10:11.068573 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:10:11.068925 systemd[1]: Reached target network.target - Network. 
Mar 2 13:10:11.146177 kernel: mlx5_core b0d4:00:02.0 enP45268s1: Link up Mar 2 13:10:11.185173 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: Data path switched to VF: enP45268s1 Mar 2 13:10:11.185952 systemd-networkd[906]: enP45268s1: Link UP Mar 2 13:10:11.186199 systemd-networkd[906]: eth0: Link UP Mar 2 13:10:11.186576 systemd-networkd[906]: eth0: Gained carrier Mar 2 13:10:11.186586 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:10:11.206656 systemd-networkd[906]: enP45268s1: Gained carrier Mar 2 13:10:11.224204 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:10:11.999974 ignition[908]: Ignition 2.19.0 Mar 2 13:10:11.999986 ignition[908]: Stage: fetch-offline Mar 2 13:10:12.000063 ignition[908]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:12.004197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:10:12.000071 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:12.002977 ignition[908]: parsed url from cmdline: "" Mar 2 13:10:12.002981 ignition[908]: no config URL provided Mar 2 13:10:12.002989 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:10:12.026284 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 2 13:10:12.003001 ignition[908]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:10:12.003006 ignition[908]: failed to fetch config: resource requires networking Mar 2 13:10:12.003194 ignition[908]: Ignition finished successfully Mar 2 13:10:12.042294 ignition[933]: Ignition 2.19.0 Mar 2 13:10:12.042300 ignition[933]: Stage: fetch Mar 2 13:10:12.042503 ignition[933]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:12.042515 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:12.042622 ignition[933]: parsed url from cmdline: "" Mar 2 13:10:12.042625 ignition[933]: no config URL provided Mar 2 13:10:12.042630 ignition[933]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:10:12.042638 ignition[933]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:10:12.042662 ignition[933]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 2 13:10:12.126704 ignition[933]: GET result: OK Mar 2 13:10:12.126787 ignition[933]: config has been read from IMDS userdata Mar 2 13:10:12.126830 ignition[933]: parsing config with SHA512: 6cee9337f6d6a172cde6fcf92aa2a0decaf76ca7b62c8c8264f3d071a5710adaaf60696a55e49547a315e5d4a84bc39ab1d8e4af468b3f7033a68ea76cb55869 Mar 2 13:10:12.130332 unknown[933]: fetched base config from "system" Mar 2 13:10:12.130654 ignition[933]: fetch: fetch complete Mar 2 13:10:12.130339 unknown[933]: fetched base config from "system" Mar 2 13:10:12.130658 ignition[933]: fetch: fetch passed Mar 2 13:10:12.130343 unknown[933]: fetched user config from "azure" Mar 2 13:10:12.130695 ignition[933]: Ignition finished successfully Mar 2 13:10:12.135184 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 2 13:10:12.148370 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 2 13:10:12.168897 ignition[939]: Ignition 2.19.0 Mar 2 13:10:12.168903 ignition[939]: Stage: kargs Mar 2 13:10:12.176572 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 13:10:12.171669 ignition[939]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:12.171681 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:12.189962 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 2 13:10:12.173231 ignition[939]: kargs: kargs passed Mar 2 13:10:12.173285 ignition[939]: Ignition finished successfully Mar 2 13:10:12.206673 ignition[945]: Ignition 2.19.0 Mar 2 13:10:12.210682 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:10:12.206684 ignition[945]: Stage: disks Mar 2 13:10:12.215714 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:10:12.206857 ignition[945]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:12.222344 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:10:12.206866 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:12.231498 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:10:12.207963 ignition[945]: disks: disks passed Mar 2 13:10:12.237976 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:10:12.208014 ignition[945]: Ignition finished successfully Mar 2 13:10:12.246985 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:10:12.268482 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 2 13:10:12.345068 systemd-fsck[953]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 2 13:10:12.351115 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:10:12.367148 systemd[1]: Mounting sysroot.mount - /sysroot... 
Mar 2 13:10:12.419181 kernel: EXT4-fs (sda9): mounted filesystem a5f5c21d-8a27-4a94-875f-5735c39d000b r/w with ordered data mode. Quota mode: none. Mar 2 13:10:12.419244 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 13:10:12.423032 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:10:12.485234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:10:12.501394 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (964) Mar 2 13:10:12.501428 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:10:12.511690 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:10:12.515225 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:10:12.518353 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 13:10:12.529516 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:10:12.530973 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 2 13:10:12.535958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:10:12.535991 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:10:12.558128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:10:12.565372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:10:12.588357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 2 13:10:12.860414 systemd-networkd[906]: eth0: Gained IPv6LL Mar 2 13:10:13.171945 coreos-metadata[981]: Mar 02 13:10:13.171 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 2 13:10:13.179368 coreos-metadata[981]: Mar 02 13:10:13.179 INFO Fetch successful Mar 2 13:10:13.183802 coreos-metadata[981]: Mar 02 13:10:13.183 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 2 13:10:13.193850 coreos-metadata[981]: Mar 02 13:10:13.193 INFO Fetch successful Mar 2 13:10:13.210240 coreos-metadata[981]: Mar 02 13:10:13.210 INFO wrote hostname ci-4081.3.101-d5e61b93e9 to /sysroot/etc/hostname Mar 2 13:10:13.217438 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 2 13:10:13.489903 initrd-setup-root[993]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:10:13.530550 initrd-setup-root[1000]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:10:13.535719 initrd-setup-root[1007]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:10:13.542768 initrd-setup-root[1014]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:10:14.848424 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 13:10:14.863391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:10:14.874323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:10:14.886930 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:10:14.883720 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 2 13:10:14.908245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 2 13:10:14.918180 ignition[1081]: INFO : Ignition 2.19.0 Mar 2 13:10:14.918180 ignition[1081]: INFO : Stage: mount Mar 2 13:10:14.918180 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:14.918180 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:14.938507 ignition[1081]: INFO : mount: mount passed Mar 2 13:10:14.938507 ignition[1081]: INFO : Ignition finished successfully Mar 2 13:10:14.924004 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:10:14.942328 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:10:14.954359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:10:14.979179 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1094) Mar 2 13:10:14.989193 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:10:14.989228 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:10:14.992478 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:10:14.999179 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:10:15.000577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 13:10:15.027095 ignition[1111]: INFO : Ignition 2.19.0 Mar 2 13:10:15.031391 ignition[1111]: INFO : Stage: files Mar 2 13:10:15.031391 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:15.031391 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:15.043183 ignition[1111]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:10:15.043183 ignition[1111]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:10:15.043183 ignition[1111]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:10:15.099589 ignition[1111]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:10:15.105303 ignition[1111]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:10:15.105303 ignition[1111]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:10:15.099969 unknown[1111]: wrote ssh authorized keys file for user: core Mar 2 13:10:15.120065 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 2 13:10:15.128297 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 2 13:10:15.193930 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 13:10:15.406128 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 2 13:10:15.406128 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 2 13:10:15.855027 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 2 13:10:16.087334 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 2 13:10:16.087334 ignition[1111]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 2 13:10:16.103862 ignition[1111]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:10:16.111865 ignition[1111]: INFO : files: files passed Mar 2 13:10:16.111865 ignition[1111]: INFO : Ignition finished successfully Mar 2 13:10:16.106472 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:10:16.142444 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:10:16.156310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:10:16.170416 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:10:16.199345 initrd-setup-root-after-ignition[1139]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:10:16.199345 initrd-setup-root-after-ignition[1139]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:10:16.170517 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 13:10:16.226262 initrd-setup-root-after-ignition[1143]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:10:16.196826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:10:16.204828 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 13:10:16.226425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:10:16.266719 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:10:16.266848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 13:10:16.276777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 13:10:16.286135 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 13:10:16.294227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 13:10:16.305357 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 13:10:16.321637 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:10:16.334447 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 13:10:16.351508 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 13:10:16.351629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 2 13:10:16.361685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:10:16.370357 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:10:16.379617 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 13:10:16.387872 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 13:10:16.387926 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:10:16.399973 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:10:16.408977 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:10:16.416733 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:10:16.425064 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:10:16.434317 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 13:10:16.443164 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 13:10:16.451834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:10:16.460689 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:10:16.469512 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 13:10:16.477426 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:10:16.484401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:10:16.484465 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:10:16.495635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:10:16.504522 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:10:16.513560 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:10:16.513595 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 2 13:10:16.523399 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:10:16.523458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 13:10:16.537225 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 13:10:16.537266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:10:16.546026 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 13:10:16.546062 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 13:10:16.554337 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 2 13:10:16.554370 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 2 13:10:16.577311 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 13:10:16.604215 ignition[1164]: INFO : Ignition 2.19.0 Mar 2 13:10:16.604215 ignition[1164]: INFO : Stage: umount Mar 2 13:10:16.604215 ignition[1164]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:10:16.604215 ignition[1164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:10:16.587338 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:10:16.655794 ignition[1164]: INFO : umount: umount passed Mar 2 13:10:16.655794 ignition[1164]: INFO : Ignition finished successfully Mar 2 13:10:16.587400 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:10:16.606249 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 13:10:16.615719 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 13:10:16.615775 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:10:16.620844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 13:10:16.620883 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 2 13:10:16.625947 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 13:10:16.628113 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 13:10:16.641225 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 13:10:16.641564 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 13:10:16.641603 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 13:10:16.651640 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 13:10:16.651694 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 13:10:16.660204 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 2 13:10:16.660250 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 2 13:10:16.668929 systemd[1]: Stopped target network.target - Network. Mar 2 13:10:16.676223 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:10:16.676279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:10:16.681267 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:10:16.690009 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:10:16.695508 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:10:16.700767 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:10:16.714064 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:10:16.722133 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:10:16.722198 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:10:16.734359 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:10:16.734405 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:10:16.743229 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 13:10:16.743288 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Mar 2 13:10:16.752386 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:10:16.752422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:10:16.761262 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:10:16.772949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:10:16.936466 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: Data path switched from VF: enP45268s1 Mar 2 13:10:16.780609 systemd-networkd[906]: eth0: DHCPv6 lease lost Mar 2 13:10:16.782490 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:10:16.782584 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:10:16.792595 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:10:16.792655 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:10:16.810329 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:10:16.819712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:10:16.819766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:10:16.828400 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:10:16.846625 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:10:16.846731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:10:16.867538 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:10:16.867881 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:10:16.877898 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:10:16.877963 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 13:10:16.885433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Mar 2 13:10:16.885468 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:10:16.893414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:10:16.893455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:10:16.907541 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:10:16.907601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:10:16.916449 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:10:16.916500 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:10:16.949367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:10:16.961857 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:10:16.961919 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:10:16.971340 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:10:16.971382 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:10:16.979863 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 13:10:16.979906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:10:16.989078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 13:10:16.989118 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:10:17.000018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:10:17.000067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:10:17.010367 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:10:17.010478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:10:17.018365 systemd[1]: network-cleanup.service: Deactivated successfully. 
Mar 2 13:10:17.018456 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 13:10:17.026664 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:10:17.026743 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:10:17.035869 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 13:10:17.045368 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 13:10:17.045444 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:10:17.071379 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:10:17.084392 systemd[1]: Switching root. Mar 2 13:10:17.296050 systemd-journald[217]: Journal stopped Mar 2 13:10:07.166452 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 2 13:10:07.166474 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Mar 2 11:11:01 -00 2026 Mar 2 13:10:07.166482 kernel: KASLR enabled Mar 2 13:10:07.166488 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 2 13:10:07.166495 kernel: printk: bootconsole [pl11] enabled Mar 2 13:10:07.166501 kernel: efi: EFI v2.7 by EDK II Mar 2 13:10:07.166508 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Mar 2 13:10:07.166515 kernel: random: crng init done Mar 2 13:10:07.166521 kernel: ACPI: Early table checksum verification disabled Mar 2 13:10:07.166527 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 2 13:10:07.166533 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166539 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166546 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 
MSFTVM DSDT01 00000001 INTL 20230628) Mar 2 13:10:07.166553 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166560 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166567 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166573 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166581 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166588 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166594 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 2 13:10:07.166601 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 2 13:10:07.166607 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 2 13:10:07.166614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 2 13:10:07.166620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 2 13:10:07.166627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 2 13:10:07.166633 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 2 13:10:07.166640 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 2 13:10:07.166646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 2 13:10:07.166654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 2 13:10:07.166661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 2 13:10:07.166667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 2 13:10:07.166673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 2 13:10:07.166680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 2 
13:10:07.166686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 2 13:10:07.166693 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Mar 2 13:10:07.166699 kernel: Zone ranges: Mar 2 13:10:07.166705 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 2 13:10:07.166712 kernel: DMA32 empty Mar 2 13:10:07.166718 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 2 13:10:07.166724 kernel: Movable zone start for each node Mar 2 13:10:07.166735 kernel: Early memory node ranges Mar 2 13:10:07.166742 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 2 13:10:07.166749 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 2 13:10:07.166756 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 2 13:10:07.166762 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 2 13:10:07.166770 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 2 13:10:07.166777 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 2 13:10:07.166784 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 2 13:10:07.166791 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 2 13:10:07.166798 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 2 13:10:07.166805 kernel: psci: probing for conduit method from ACPI. Mar 2 13:10:07.166811 kernel: psci: PSCIv1.1 detected in firmware. Mar 2 13:10:07.166818 kernel: psci: Using standard PSCI v0.2 function IDs Mar 2 13:10:07.166825 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Mar 2 13:10:07.166832 kernel: psci: SMC Calling Convention v1.4 Mar 2 13:10:07.166838 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 2 13:10:07.166845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 2 13:10:07.166853 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 2 13:10:07.166860 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 2 13:10:07.166867 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 2 13:10:07.166874 kernel: Detected PIPT I-cache on CPU0 Mar 2 13:10:07.166881 kernel: CPU features: detected: GIC system register CPU interface Mar 2 13:10:07.166888 kernel: CPU features: detected: Hardware dirty bit management Mar 2 13:10:07.166894 kernel: CPU features: detected: Spectre-BHB Mar 2 13:10:07.166901 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 2 13:10:07.166908 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 2 13:10:07.166915 kernel: CPU features: detected: ARM erratum 1418040 Mar 2 13:10:07.166922 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 2 13:10:07.166930 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 2 13:10:07.166937 kernel: alternatives: applying boot alternatives Mar 2 13:10:07.166945 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f Mar 2 13:10:07.166952 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 2 13:10:07.166959 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 2 13:10:07.166966 kernel: Fallback order for Node 0: 0 Mar 2 13:10:07.166973 kernel: Built 1 
zonelists, mobility grouping on. Total pages: 1032156 Mar 2 13:10:07.166980 kernel: Policy zone: Normal Mar 2 13:10:07.166986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 2 13:10:07.166993 kernel: software IO TLB: area num 2. Mar 2 13:10:07.167000 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 2 13:10:07.167009 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 2 13:10:07.167016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 2 13:10:07.167022 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 2 13:10:07.167030 kernel: rcu: RCU event tracing is enabled. Mar 2 13:10:07.167037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 2 13:10:07.167044 kernel: Trampoline variant of Tasks RCU enabled. Mar 2 13:10:07.167051 kernel: Tracing variant of Tasks RCU enabled. Mar 2 13:10:07.167058 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 2 13:10:07.167065 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 2 13:10:07.167071 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 2 13:10:07.167078 kernel: GICv3: 960 SPIs implemented Mar 2 13:10:07.167086 kernel: GICv3: 0 Extended SPIs implemented Mar 2 13:10:07.168147 kernel: Root IRQ handler: gic_handle_irq Mar 2 13:10:07.168163 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 2 13:10:07.168171 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 2 13:10:07.168178 kernel: ITS: No ITS available, not enabling LPIs Mar 2 13:10:07.168185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 2 13:10:07.168193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 2 13:10:07.168200 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). 
Mar 2 13:10:07.168207 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 2 13:10:07.168215 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 2 13:10:07.168222 kernel: Console: colour dummy device 80x25 Mar 2 13:10:07.168234 kernel: printk: console [tty1] enabled Mar 2 13:10:07.168243 kernel: ACPI: Core revision 20230628 Mar 2 13:10:07.168250 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 2 13:10:07.168257 kernel: pid_max: default: 32768 minimum: 301 Mar 2 13:10:07.168264 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 2 13:10:07.168272 kernel: landlock: Up and running. Mar 2 13:10:07.168279 kernel: SELinux: Initializing. Mar 2 13:10:07.168286 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:10:07.168293 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 2 13:10:07.168302 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 2 13:10:07.168309 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 2 13:10:07.168316 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 2 13:10:07.168323 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 2 13:10:07.168330 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 2 13:10:07.168337 kernel: rcu: Hierarchical SRCU implementation. Mar 2 13:10:07.168344 kernel: rcu: Max phase no-delay instances is 400. Mar 2 13:10:07.168352 kernel: Remapping and enabling EFI services. Mar 2 13:10:07.168365 kernel: smp: Bringing up secondary CPUs ... 
Mar 2 13:10:07.168372 kernel: Detected PIPT I-cache on CPU1 Mar 2 13:10:07.168380 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 2 13:10:07.168387 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 2 13:10:07.168396 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 2 13:10:07.168403 kernel: smp: Brought up 1 node, 2 CPUs Mar 2 13:10:07.168411 kernel: SMP: Total of 2 processors activated. Mar 2 13:10:07.168418 kernel: CPU features: detected: 32-bit EL0 Support Mar 2 13:10:07.168426 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 2 13:10:07.168435 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 2 13:10:07.168442 kernel: CPU features: detected: CRC32 instructions Mar 2 13:10:07.168450 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 2 13:10:07.168457 kernel: CPU features: detected: LSE atomic instructions Mar 2 13:10:07.168464 kernel: CPU features: detected: Privileged Access Never Mar 2 13:10:07.168472 kernel: CPU: All CPU(s) started at EL1 Mar 2 13:10:07.168479 kernel: alternatives: applying system-wide alternatives Mar 2 13:10:07.168487 kernel: devtmpfs: initialized Mar 2 13:10:07.168494 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 2 13:10:07.168503 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 2 13:10:07.168510 kernel: pinctrl core: initialized pinctrl subsystem Mar 2 13:10:07.168518 kernel: SMBIOS 3.1.0 present. 
Mar 2 13:10:07.168525 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 2 13:10:07.168533 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 2 13:10:07.168540 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 2 13:10:07.168547 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 2 13:10:07.168555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 2 13:10:07.168562 kernel: audit: initializing netlink subsys (disabled) Mar 2 13:10:07.168571 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 2 13:10:07.168579 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 2 13:10:07.168586 kernel: cpuidle: using governor menu Mar 2 13:10:07.168593 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 2 13:10:07.168601 kernel: ASID allocator initialised with 32768 entries Mar 2 13:10:07.168608 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 2 13:10:07.168615 kernel: Serial: AMBA PL011 UART driver Mar 2 13:10:07.168623 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 2 13:10:07.168630 kernel: Modules: 0 pages in range for non-PLT usage Mar 2 13:10:07.168639 kernel: Modules: 509008 pages in range for PLT usage Mar 2 13:10:07.168646 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 2 13:10:07.168654 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 2 13:10:07.168661 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 2 13:10:07.168668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 2 13:10:07.168676 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 2 13:10:07.168683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 2 13:10:07.168691 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Mar 2 13:10:07.168698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 2 13:10:07.168707 kernel: ACPI: Added _OSI(Module Device) Mar 2 13:10:07.168714 kernel: ACPI: Added _OSI(Processor Device) Mar 2 13:10:07.168721 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 2 13:10:07.168729 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 2 13:10:07.168736 kernel: ACPI: Interpreter enabled Mar 2 13:10:07.168743 kernel: ACPI: Using GIC for interrupt routing Mar 2 13:10:07.168750 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 2 13:10:07.168758 kernel: printk: console [ttyAMA0] enabled Mar 2 13:10:07.168765 kernel: printk: bootconsole [pl11] disabled Mar 2 13:10:07.168774 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 2 13:10:07.168781 kernel: iommu: Default domain type: Translated Mar 2 13:10:07.168789 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 2 13:10:07.168796 kernel: efivars: Registered efivars operations Mar 2 13:10:07.168803 kernel: vgaarb: loaded Mar 2 13:10:07.168811 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 2 13:10:07.168818 kernel: VFS: Disk quotas dquot_6.6.0 Mar 2 13:10:07.168826 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 2 13:10:07.168833 kernel: pnp: PnP ACPI init Mar 2 13:10:07.168842 kernel: pnp: PnP ACPI: found 0 devices Mar 2 13:10:07.168849 kernel: NET: Registered PF_INET protocol family Mar 2 13:10:07.168857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 2 13:10:07.168864 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 2 13:10:07.168872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 2 13:10:07.168879 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 2 13:10:07.168887 
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 2 13:10:07.168894 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 2 13:10:07.168902 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:10:07.168911 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 2 13:10:07.168918 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 2 13:10:07.168925 kernel: PCI: CLS 0 bytes, default 64 Mar 2 13:10:07.168933 kernel: kvm [1]: HYP mode not available Mar 2 13:10:07.168940 kernel: Initialise system trusted keyrings Mar 2 13:10:07.168947 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 2 13:10:07.168955 kernel: Key type asymmetric registered Mar 2 13:10:07.168962 kernel: Asymmetric key parser 'x509' registered Mar 2 13:10:07.168969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 2 13:10:07.168978 kernel: io scheduler mq-deadline registered Mar 2 13:10:07.168985 kernel: io scheduler kyber registered Mar 2 13:10:07.168993 kernel: io scheduler bfq registered Mar 2 13:10:07.169000 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 2 13:10:07.169007 kernel: thunder_xcv, ver 1.0 Mar 2 13:10:07.169014 kernel: thunder_bgx, ver 1.0 Mar 2 13:10:07.169021 kernel: nicpf, ver 1.0 Mar 2 13:10:07.169029 kernel: nicvf, ver 1.0 Mar 2 13:10:07.169165 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 2 13:10:07.169243 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-02T13:10:06 UTC (1772457006) Mar 2 13:10:07.169254 kernel: efifb: probing for efifb Mar 2 13:10:07.169262 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 2 13:10:07.169269 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 2 13:10:07.169277 kernel: efifb: scrolling: redraw Mar 2 13:10:07.169284 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 2 13:10:07.169291 kernel: Console: switching to colour 
frame buffer device 128x48 Mar 2 13:10:07.169299 kernel: fb0: EFI VGA frame buffer device Mar 2 13:10:07.169308 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 2 13:10:07.169316 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 2 13:10:07.169323 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 2 13:10:07.169331 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 2 13:10:07.169338 kernel: watchdog: Hard watchdog permanently disabled Mar 2 13:10:07.169346 kernel: NET: Registered PF_INET6 protocol family Mar 2 13:10:07.169353 kernel: Segment Routing with IPv6 Mar 2 13:10:07.169360 kernel: In-situ OAM (IOAM) with IPv6 Mar 2 13:10:07.169368 kernel: NET: Registered PF_PACKET protocol family Mar 2 13:10:07.169376 kernel: Key type dns_resolver registered Mar 2 13:10:07.169384 kernel: registered taskstats version 1 Mar 2 13:10:07.169391 kernel: Loading compiled-in X.509 certificates Mar 2 13:10:07.169399 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 888055ac257926b028c9aac8084c1e2b1bcee773' Mar 2 13:10:07.169406 kernel: Key type .fscrypt registered Mar 2 13:10:07.169413 kernel: Key type fscrypt-provisioning registered Mar 2 13:10:07.169420 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 2 13:10:07.169428 kernel: ima: Allocated hash algorithm: sha1 Mar 2 13:10:07.169435 kernel: ima: No architecture policies found Mar 2 13:10:07.169444 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 2 13:10:07.169451 kernel: clk: Disabling unused clocks Mar 2 13:10:07.169459 kernel: Freeing unused kernel memory: 39424K Mar 2 13:10:07.169466 kernel: Run /init as init process Mar 2 13:10:07.169473 kernel: with arguments: Mar 2 13:10:07.169480 kernel: /init Mar 2 13:10:07.169487 kernel: with environment: Mar 2 13:10:07.169494 kernel: HOME=/ Mar 2 13:10:07.169502 kernel: TERM=linux Mar 2 13:10:07.169511 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:10:07.169522 systemd[1]: Detected virtualization microsoft. Mar 2 13:10:07.169530 systemd[1]: Detected architecture arm64. Mar 2 13:10:07.169537 systemd[1]: Running in initrd. Mar 2 13:10:07.169545 systemd[1]: No hostname configured, using default hostname. Mar 2 13:10:07.169552 systemd[1]: Hostname set to . Mar 2 13:10:07.169561 systemd[1]: Initializing machine ID from random generator. Mar 2 13:10:07.169570 systemd[1]: Queued start job for default target initrd.target. Mar 2 13:10:07.169578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:10:07.169586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:10:07.169594 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 2 13:10:07.169602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 2 13:10:07.169610 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 2 13:10:07.169618 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 2 13:10:07.169627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 2 13:10:07.169637 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 2 13:10:07.169645 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:10:07.169653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:10:07.169661 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:10:07.169669 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:10:07.169677 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:10:07.169685 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:10:07.169693 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:10:07.169703 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:10:07.169711 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 2 13:10:07.169719 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 2 13:10:07.169726 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:10:07.169734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:10:07.169742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:10:07.169750 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:10:07.169758 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 2 13:10:07.169768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Mar 2 13:10:07.169776 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 2 13:10:07.169784 systemd[1]: Starting systemd-fsck-usr.service... Mar 2 13:10:07.169791 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:10:07.169814 systemd-journald[217]: Collecting audit messages is disabled. Mar 2 13:10:07.169835 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:10:07.169843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:10:07.169852 systemd-journald[217]: Journal started Mar 2 13:10:07.169870 systemd-journald[217]: Runtime Journal (/run/log/journal/bd17a39deaaa4ffd840712943f0174ee) is 8.0M, max 78.5M, 70.5M free. Mar 2 13:10:07.175724 systemd-modules-load[218]: Inserted module 'overlay' Mar 2 13:10:07.195610 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:10:07.196106 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 2 13:10:07.220249 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 2 13:10:07.220269 kernel: Bridge firewalling registered Mar 2 13:10:07.213357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:10:07.216982 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 2 13:10:07.226206 systemd[1]: Finished systemd-fsck-usr.service. Mar 2 13:10:07.234332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:10:07.242495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:10:07.260362 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:10:07.266230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 2 13:10:07.279491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:10:07.304300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:10:07.314568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:10:07.320137 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:10:07.337111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:10:07.346784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:10:07.367369 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 2 13:10:07.379263 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:10:07.387235 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:10:07.404493 dracut-cmdline[253]: dracut-dracut-053 Mar 2 13:10:07.404493 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f Mar 2 13:10:07.443672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:10:07.449127 systemd-resolved[259]: Positive Trust Anchors: Mar 2 13:10:07.449137 systemd-resolved[259]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:10:07.449169 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:10:07.451396 systemd-resolved[259]: Defaulting to hostname 'linux'. Mar 2 13:10:07.453383 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:10:07.458015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:10:07.523103 kernel: SCSI subsystem initialized Mar 2 13:10:07.530108 kernel: Loading iSCSI transport class v2.0-870. Mar 2 13:10:07.540120 kernel: iscsi: registered transport (tcp) Mar 2 13:10:07.555708 kernel: iscsi: registered transport (qla4xxx) Mar 2 13:10:07.555764 kernel: QLogic iSCSI HBA Driver Mar 2 13:10:07.588739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 2 13:10:07.599465 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 2 13:10:07.625389 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 2 13:10:07.625437 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:10:07.630473 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:10:07.677122 kernel: raid6: neonx8 gen() 15795 MB/s
Mar 2 13:10:07.696106 kernel: raid6: neonx4 gen() 15687 MB/s
Mar 2 13:10:07.715104 kernel: raid6: neonx2 gen() 13224 MB/s
Mar 2 13:10:07.735105 kernel: raid6: neonx1 gen() 10488 MB/s
Mar 2 13:10:07.754103 kernel: raid6: int64x8 gen() 6990 MB/s
Mar 2 13:10:07.773102 kernel: raid6: int64x4 gen() 7363 MB/s
Mar 2 13:10:07.793113 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 2 13:10:07.814706 kernel: raid6: int64x1 gen() 5071 MB/s
Mar 2 13:10:07.814738 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s
Mar 2 13:10:07.836790 kernel: raid6: .... xor() 12035 MB/s, rmw enabled
Mar 2 13:10:07.836835 kernel: raid6: using neon recovery algorithm
Mar 2 13:10:07.846423 kernel: xor: measuring software checksum speed
Mar 2 13:10:07.846443 kernel: 8regs : 19812 MB/sec
Mar 2 13:10:07.849207 kernel: 32regs : 19655 MB/sec
Mar 2 13:10:07.852663 kernel: arm64_neon : 27052 MB/sec
Mar 2 13:10:07.855768 kernel: xor: using function: arm64_neon (27052 MB/sec)
Mar 2 13:10:07.905107 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:10:07.916216 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:10:07.929219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:10:07.949078 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Mar 2 13:10:07.953419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:10:07.970323 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:10:07.985629 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Mar 2 13:10:08.010926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:10:08.023357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:10:08.061802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:10:08.081272 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:10:08.101122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:10:08.111727 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:10:08.127188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:10:08.140250 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:10:08.162416 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:10:08.176107 kernel: hv_vmbus: Vmbus version:5.3
Mar 2 13:10:08.176551 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:10:08.197748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:10:08.205798 kernel: hv_vmbus: registering driver hid_hyperv
Mar 2 13:10:08.205820 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 2 13:10:08.205830 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 2 13:10:08.197902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:10:08.255058 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 2 13:10:08.255087 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 2 13:10:08.255147 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 2 13:10:08.255167 kernel: hv_vmbus: registering driver hv_netvsc
Mar 2 13:10:08.255177 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 2 13:10:08.255326 kernel: PTP clock support registered
Mar 2 13:10:08.241497 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:10:08.277662 kernel: hv_vmbus: registering driver hv_storvsc
Mar 2 13:10:08.277684 kernel: scsi host1: storvsc_host_t
Mar 2 13:10:08.277724 kernel: scsi host0: storvsc_host_t
Mar 2 13:10:08.265221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:10:08.286920 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 2 13:10:08.265435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:08.292235 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:08.312117 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 2 13:10:08.317396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:08.082778 kernel: hv_utils: Registering HyperV Utility Driver
Mar 2 13:10:08.087814 kernel: hv_vmbus: registering driver hv_utils
Mar 2 13:10:08.087830 kernel: hv_utils: Shutdown IC version 3.2
Mar 2 13:10:08.087838 kernel: hv_utils: Heartbeat IC version 3.0
Mar 2 13:10:08.087848 kernel: hv_utils: TimeSync IC version 4.0
Mar 2 13:10:08.087856 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 2 13:10:08.087998 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:10:08.088007 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 2 13:10:08.088096 systemd-journald[217]: Time jumped backwards, rotating.
Mar 2 13:10:08.088146 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: VF slot 1 added
Mar 2 13:10:08.055269 systemd-resolved[259]: Clock change detected. Flushing caches.
Mar 2 13:10:08.096368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:10:08.096469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:08.128916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:10:08.129093 kernel: hv_vmbus: registering driver hv_pci
Mar 2 13:10:08.130248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:08.150607 kernel: hv_pci 67ca448e-b0d4-4688-8eb5-dd23c23759d7: PCI VMBus probing: Using version 0x10004
Mar 2 13:10:08.145707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:08.172220 kernel: hv_pci 67ca448e-b0d4-4688-8eb5-dd23c23759d7: PCI host bridge to bus b0d4:00
Mar 2 13:10:08.172390 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 2 13:10:08.172506 kernel: pci_bus b0d4:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 2 13:10:08.172602 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 2 13:10:08.172697 kernel: pci_bus b0d4:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 2 13:10:08.179274 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 2 13:10:08.179409 kernel: pci b0d4:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 2 13:10:08.179558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:10:08.217270 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 2 13:10:08.217456 kernel: pci b0d4:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 2 13:10:08.217572 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 2 13:10:08.217661 kernel: pci b0d4:00:02.0: enabling Extended Tags
Mar 2 13:10:08.217746 kernel: pci b0d4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b0d4:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 2 13:10:08.217831 kernel: pci_bus b0d4:00: busn_res: [bus 00-ff] end is updated to 00
Mar 2 13:10:08.223772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:10:08.223795 kernel: pci b0d4:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 2 13:10:08.231979 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 2 13:10:08.260174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#296 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:10:08.260598 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:10:08.290881 kernel: mlx5_core b0d4:00:02.0: enabling device (0000 -> 0002)
Mar 2 13:10:08.296173 kernel: mlx5_core b0d4:00:02.0: firmware version: 16.30.5026
Mar 2 13:10:08.493181 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: VF registering: eth1
Mar 2 13:10:08.493359 kernel: mlx5_core b0d4:00:02.0 eth1: joined to eth0
Mar 2 13:10:08.501452 kernel: mlx5_core b0d4:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 2 13:10:08.512183 kernel: mlx5_core b0d4:00:02.0 enP45268s1: renamed from eth1
Mar 2 13:10:08.776193 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (502)
Mar 2 13:10:08.790257 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 2 13:10:08.805533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 2 13:10:08.836629 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 2 13:10:08.882187 kernel: BTRFS: device fsid 0d0ab669-47ba-4267-b368-82e952673c8e devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (506)
Mar 2 13:10:08.895086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 2 13:10:08.900521 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 2 13:10:08.927335 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:10:08.948193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:10:08.960176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:10:08.970175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:10:09.971222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:10:09.971622 disk-uuid[607]: The operation has completed successfully.
Mar 2 13:10:10.032215 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:10:10.034182 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:10:10.068282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:10:10.078188 sh[720]: Success
Mar 2 13:10:10.106178 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 2 13:10:10.441355 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:10:10.459283 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:10:10.466985 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:10:10.496807 kernel: BTRFS info (device dm-0): first mount of filesystem 0d0ab669-47ba-4267-b368-82e952673c8e
Mar 2 13:10:10.496845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:10:10.501953 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:10:10.505761 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:10:10.508999 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:10:10.873446 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:10:10.881204 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:10:10.899360 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:10:10.906337 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:10:10.940488 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:10:10.940528 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:10:10.943957 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:10:10.982189 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:10:10.991442 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:10:11.000748 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:10:11.001891 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:10:11.022331 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:10:11.031572 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:10:11.041370 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:10:11.059013 systemd-networkd[906]: lo: Link UP
Mar 2 13:10:11.059021 systemd-networkd[906]: lo: Gained carrier
Mar 2 13:10:11.063268 systemd-networkd[906]: Enumeration completed
Mar 2 13:10:11.063420 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:10:11.068569 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:10:11.068573 systemd-networkd[906]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:10:11.068925 systemd[1]: Reached target network.target - Network.
Mar 2 13:10:11.146177 kernel: mlx5_core b0d4:00:02.0 enP45268s1: Link up
Mar 2 13:10:11.185173 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: Data path switched to VF: enP45268s1
Mar 2 13:10:11.185952 systemd-networkd[906]: enP45268s1: Link UP
Mar 2 13:10:11.186199 systemd-networkd[906]: eth0: Link UP
Mar 2 13:10:11.186576 systemd-networkd[906]: eth0: Gained carrier
Mar 2 13:10:11.186586 systemd-networkd[906]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:10:11.206656 systemd-networkd[906]: enP45268s1: Gained carrier
Mar 2 13:10:11.224204 systemd-networkd[906]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 2 13:10:11.999974 ignition[908]: Ignition 2.19.0
Mar 2 13:10:11.999986 ignition[908]: Stage: fetch-offline
Mar 2 13:10:12.000063 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:12.004197 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:10:12.000071 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:12.002977 ignition[908]: parsed url from cmdline: ""
Mar 2 13:10:12.002981 ignition[908]: no config URL provided
Mar 2 13:10:12.002989 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:10:12.026284 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 2 13:10:12.003001 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:10:12.003006 ignition[908]: failed to fetch config: resource requires networking
Mar 2 13:10:12.003194 ignition[908]: Ignition finished successfully
Mar 2 13:10:12.042294 ignition[933]: Ignition 2.19.0
Mar 2 13:10:12.042300 ignition[933]: Stage: fetch
Mar 2 13:10:12.042503 ignition[933]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:12.042515 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:12.042622 ignition[933]: parsed url from cmdline: ""
Mar 2 13:10:12.042625 ignition[933]: no config URL provided
Mar 2 13:10:12.042630 ignition[933]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:10:12.042638 ignition[933]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:10:12.042662 ignition[933]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 2 13:10:12.126704 ignition[933]: GET result: OK
Mar 2 13:10:12.126787 ignition[933]: config has been read from IMDS userdata
Mar 2 13:10:12.126830 ignition[933]: parsing config with SHA512: 6cee9337f6d6a172cde6fcf92aa2a0decaf76ca7b62c8c8264f3d071a5710adaaf60696a55e49547a315e5d4a84bc39ab1d8e4af468b3f7033a68ea76cb55869
Mar 2 13:10:12.130332 unknown[933]: fetched base config from "system"
Mar 2 13:10:12.130654 ignition[933]: fetch: fetch complete
Mar 2 13:10:12.130339 unknown[933]: fetched base config from "system"
Mar 2 13:10:12.130658 ignition[933]: fetch: fetch passed
Mar 2 13:10:12.130343 unknown[933]: fetched user config from "azure"
Mar 2 13:10:12.130695 ignition[933]: Ignition finished successfully
Mar 2 13:10:12.135184 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 2 13:10:12.148370 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:10:12.168897 ignition[939]: Ignition 2.19.0
Mar 2 13:10:12.168903 ignition[939]: Stage: kargs
Mar 2 13:10:12.176572 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:10:12.171669 ignition[939]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:12.171681 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:12.189962 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:10:12.173231 ignition[939]: kargs: kargs passed
Mar 2 13:10:12.173285 ignition[939]: Ignition finished successfully
Mar 2 13:10:12.206673 ignition[945]: Ignition 2.19.0
Mar 2 13:10:12.210682 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:10:12.206684 ignition[945]: Stage: disks
Mar 2 13:10:12.215714 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:10:12.206857 ignition[945]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:12.222344 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:10:12.206866 ignition[945]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:12.231498 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:10:12.207963 ignition[945]: disks: disks passed
Mar 2 13:10:12.237976 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:10:12.208014 ignition[945]: Ignition finished successfully
Mar 2 13:10:12.246985 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:10:12.268482 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:10:12.345068 systemd-fsck[953]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 2 13:10:12.351115 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:10:12.367148 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:10:12.419181 kernel: EXT4-fs (sda9): mounted filesystem a5f5c21d-8a27-4a94-875f-5735c39d000b r/w with ordered data mode. Quota mode: none.
Mar 2 13:10:12.419244 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:10:12.423032 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:10:12.485234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:10:12.501394 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (964)
Mar 2 13:10:12.501428 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:10:12.511690 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:10:12.515225 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:10:12.518353 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:10:12.529516 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:10:12.530973 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 2 13:10:12.535958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:10:12.535991 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:10:12.558128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:10:12.565372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:10:12.588357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:10:12.860414 systemd-networkd[906]: eth0: Gained IPv6LL
Mar 2 13:10:13.171945 coreos-metadata[981]: Mar 02 13:10:13.171 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 2 13:10:13.179368 coreos-metadata[981]: Mar 02 13:10:13.179 INFO Fetch successful
Mar 2 13:10:13.183802 coreos-metadata[981]: Mar 02 13:10:13.183 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 2 13:10:13.193850 coreos-metadata[981]: Mar 02 13:10:13.193 INFO Fetch successful
Mar 2 13:10:13.210240 coreos-metadata[981]: Mar 02 13:10:13.210 INFO wrote hostname ci-4081.3.101-d5e61b93e9 to /sysroot/etc/hostname
Mar 2 13:10:13.217438 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:10:13.489903 initrd-setup-root[993]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:10:13.530550 initrd-setup-root[1000]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:10:13.535719 initrd-setup-root[1007]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:10:13.542768 initrd-setup-root[1014]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:10:14.848424 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:10:14.863391 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:10:14.874323 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:10:14.886930 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:10:14.883720 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:10:14.908245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:10:14.918180 ignition[1081]: INFO : Ignition 2.19.0
Mar 2 13:10:14.918180 ignition[1081]: INFO : Stage: mount
Mar 2 13:10:14.918180 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:14.918180 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:14.938507 ignition[1081]: INFO : mount: mount passed
Mar 2 13:10:14.938507 ignition[1081]: INFO : Ignition finished successfully
Mar 2 13:10:14.924004 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:10:14.942328 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:10:14.954359 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:10:14.979179 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1094)
Mar 2 13:10:14.989193 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:10:14.989228 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:10:14.992478 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:10:14.999179 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:10:15.000577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:10:15.027095 ignition[1111]: INFO : Ignition 2.19.0
Mar 2 13:10:15.031391 ignition[1111]: INFO : Stage: files
Mar 2 13:10:15.031391 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:15.031391 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:15.043183 ignition[1111]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:10:15.043183 ignition[1111]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:10:15.043183 ignition[1111]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:10:15.099589 ignition[1111]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:10:15.105303 ignition[1111]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:10:15.105303 ignition[1111]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:10:15.099969 unknown[1111]: wrote ssh authorized keys file for user: core
Mar 2 13:10:15.120065 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 2 13:10:15.128297 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 2 13:10:15.193930 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:10:15.406128 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 2 13:10:15.406128 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 2 13:10:15.421871 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 2 13:10:15.855027 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 13:10:16.087334 ignition[1111]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 2 13:10:16.087334 ignition[1111]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 13:10:16.103862 ignition[1111]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:10:16.111865 ignition[1111]: INFO : files: files passed
Mar 2 13:10:16.111865 ignition[1111]: INFO : Ignition finished successfully
Mar 2 13:10:16.106472 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:10:16.142444 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:10:16.156310 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:10:16.170416 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:10:16.199345 initrd-setup-root-after-ignition[1139]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:10:16.199345 initrd-setup-root-after-ignition[1139]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:10:16.170517 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:10:16.226262 initrd-setup-root-after-ignition[1143]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:10:16.196826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:10:16.204828 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:10:16.226425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:10:16.266719 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:10:16.266848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:10:16.276777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:10:16.286135 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:10:16.294227 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:10:16.305357 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:10:16.321637 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:10:16.334447 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:10:16.351508 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:10:16.351629 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:10:16.361685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:10:16.370357 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:10:16.379617 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:10:16.387872 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:10:16.387926 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:10:16.399973 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:10:16.408977 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:10:16.416733 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:10:16.425064 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:10:16.434317 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:10:16.443164 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:10:16.451834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:10:16.460689 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:10:16.469512 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:10:16.477426 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:10:16.484401 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:10:16.484465 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:10:16.495635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:10:16.504522 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:10:16.513560 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:10:16.513595 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:10:16.523399 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:10:16.523458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:10:16.537225 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:10:16.537266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:10:16.546026 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:10:16.546062 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:10:16.554337 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 2 13:10:16.554370 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:10:16.577311 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:10:16.604215 ignition[1164]: INFO : Ignition 2.19.0
Mar 2 13:10:16.604215 ignition[1164]: INFO : Stage: umount
Mar 2 13:10:16.604215 ignition[1164]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:10:16.604215 ignition[1164]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:10:16.587338 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:10:16.655794 ignition[1164]: INFO : umount: umount passed
Mar 2 13:10:16.655794 ignition[1164]: INFO : Ignition finished successfully
Mar 2 13:10:16.587400 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:10:16.606249 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:10:16.615719 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:10:16.615775 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:10:16.620844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:10:16.620883 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:10:16.625947 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:10:16.628113 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:10:16.641225 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:10:16.641564 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:10:16.641603 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:10:16.651640 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:10:16.651694 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:10:16.660204 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 2 13:10:16.660250 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 2 13:10:16.668929 systemd[1]: Stopped target network.target - Network.
Mar 2 13:10:16.676223 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:10:16.676279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:10:16.681267 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:10:16.690009 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:10:16.695508 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:10:16.700767 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:10:16.714064 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:10:16.722133 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:10:16.722198 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:10:16.734359 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:10:16.734405 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:10:16.743229 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:10:16.743288 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:10:16.752386 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:10:16.752422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:10:16.761262 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:10:16.772949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:10:16.936466 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: Data path switched from VF: enP45268s1
Mar 2 13:10:16.780609 systemd-networkd[906]: eth0: DHCPv6 lease lost
Mar 2 13:10:16.782490 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:10:16.782584 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:10:16.792595 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:10:16.792655 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:10:16.810329 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:10:16.819712 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:10:16.819766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:10:16.828400 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:10:16.846625 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:10:16.846731 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:10:16.867538 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:10:16.867881 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:10:16.877898 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:10:16.877963 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:10:16.885433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:10:16.885468 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:10:16.893414 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:10:16.893455 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:10:16.907541 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:10:16.907601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:10:16.916449 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:10:16.916500 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:10:16.949367 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:10:16.961857 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:10:16.961919 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:10:16.971340 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:10:16.971382 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:10:16.979863 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:10:16.979906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:10:16.989078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:10:16.989118 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:10:17.000018 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:10:17.000067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:17.010367 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:10:17.010478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:10:17.018365 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:10:17.018456 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:10:17.026664 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:10:17.026743 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:10:17.035869 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:10:17.045368 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:10:17.045444 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:10:17.071379 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:10:17.084392 systemd[1]: Switching root.
Mar 2 13:10:17.296050 systemd-journald[217]: Journal stopped
Mar 2 13:10:22.108786 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:10:22.108812 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:10:22.108822 kernel: SELinux: policy capability open_perms=1
Mar 2 13:10:22.108832 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:10:22.108840 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:10:22.108848 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:10:22.108857 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:10:22.108865 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:10:22.108872 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:10:22.108880 kernel: audit: type=1403 audit(1772457018.627:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:10:22.108891 systemd[1]: Successfully loaded SELinux policy in 169.162ms.
Mar 2 13:10:22.108900 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.176ms.
Mar 2 13:10:22.108910 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:10:22.108919 systemd[1]: Detected virtualization microsoft.
Mar 2 13:10:22.108928 systemd[1]: Detected architecture arm64.
Mar 2 13:10:22.108939 systemd[1]: Detected first boot.
Mar 2 13:10:22.108948 systemd[1]: Hostname set to .
Mar 2 13:10:22.108957 systemd[1]: Initializing machine ID from random generator.
Mar 2 13:10:22.108966 zram_generator::config[1204]: No configuration found.
Mar 2 13:10:22.108976 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:10:22.108985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:10:22.108996 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:10:22.109007 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:10:22.109017 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:10:22.109026 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:10:22.109036 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:10:22.109045 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:10:22.109068 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:10:22.109080 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:10:22.109090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:10:22.109100 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:10:22.109110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:10:22.109119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:10:22.109129 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:10:22.109138 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:10:22.109148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:10:22.109157 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:10:22.109192 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 2 13:10:22.109202 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:10:22.109211 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:10:22.109223 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:10:22.109232 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:10:22.109242 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:10:22.109252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:10:22.109264 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:10:22.109273 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:10:22.109282 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:10:22.109292 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:10:22.109301 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:10:22.109310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:10:22.109320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:10:22.109332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:10:22.109341 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:10:22.109351 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:10:22.109361 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:10:22.109370 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:10:22.109380 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:10:22.109391 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:10:22.109400 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:10:22.109413 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:10:22.109427 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:10:22.109439 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:10:22.109451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:10:22.109463 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:10:22.109476 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:10:22.109492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:10:22.109505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:10:22.109517 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:10:22.109529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:10:22.109541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:10:22.109553 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:10:22.109565 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:10:22.109577 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:10:22.109589 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:10:22.109601 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:10:22.109611 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:10:22.109621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:10:22.109631 kernel: loop: module loaded
Mar 2 13:10:22.109647 kernel: fuse: init (API version 7.39)
Mar 2 13:10:22.109659 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:10:22.109672 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:10:22.109684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:10:22.109696 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:10:22.109711 systemd[1]: Stopped verity-setup.service.
Mar 2 13:10:22.109740 systemd-journald[1286]: Collecting audit messages is disabled.
Mar 2 13:10:22.109761 systemd-journald[1286]: Journal started
Mar 2 13:10:22.109784 systemd-journald[1286]: Runtime Journal (/run/log/journal/f31c0c2765724b37bdd2287a3c5b7876) is 8.0M, max 78.5M, 70.5M free.
Mar 2 13:10:21.269410 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:10:21.419892 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 2 13:10:21.420258 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:10:21.420543 systemd[1]: systemd-journald.service: Consumed 2.350s CPU time.
Mar 2 13:10:22.120259 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:10:22.120256 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:10:22.124628 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:10:22.130205 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:10:22.137375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:10:22.142189 kernel: ACPI: bus type drm_connector registered
Mar 2 13:10:22.142377 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:10:22.147364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:10:22.151605 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:10:22.157236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:10:22.162999 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:10:22.163136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:10:22.168654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:10:22.168784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:10:22.174046 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:10:22.174189 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:10:22.179546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:10:22.179680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:10:22.185361 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:10:22.185490 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:10:22.190484 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:10:22.190612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:10:22.195353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:10:22.200355 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:10:22.205881 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:10:22.211208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:10:22.224674 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:10:22.236242 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:10:22.244298 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:10:22.249408 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:10:22.249442 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:10:22.254932 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:10:22.261534 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:10:22.267422 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:10:22.271770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:10:22.273237 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:10:22.278749 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:10:22.283688 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:10:22.284786 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:10:22.289547 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:10:22.290961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:10:22.308302 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:10:22.315411 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:10:22.325444 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:10:22.332797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:10:22.341579 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:10:22.352196 kernel: loop0: detected capacity change from 0 to 197488
Mar 2 13:10:22.352286 systemd-journald[1286]: Time spent on flushing to /var/log/journal/f31c0c2765724b37bdd2287a3c5b7876 is 61.053ms for 897 entries.
Mar 2 13:10:22.352286 systemd-journald[1286]: System Journal (/var/log/journal/f31c0c2765724b37bdd2287a3c5b7876) is 11.8M, max 2.6G, 2.6G free.
Mar 2 13:10:22.467099 systemd-journald[1286]: Received client request to flush runtime journal.
Mar 2 13:10:22.467146 systemd-journald[1286]: /var/log/journal/f31c0c2765724b37bdd2287a3c5b7876/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Mar 2 13:10:22.467186 systemd-journald[1286]: Rotating system journal.
Mar 2 13:10:22.467353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:10:22.353492 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:10:22.363643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:10:22.375079 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:10:22.402740 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:10:22.408408 udevadm[1341]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 2 13:10:22.411296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:10:22.473223 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:10:22.483186 kernel: loop1: detected capacity change from 0 to 31320
Mar 2 13:10:22.509543 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:10:22.510171 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:10:22.556425 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:10:22.571295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:10:22.636501 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Mar 2 13:10:22.636515 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Mar 2 13:10:22.641012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:10:22.967184 kernel: loop2: detected capacity change from 0 to 114328
Mar 2 13:10:23.031385 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:10:23.040374 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:10:23.066802 systemd-udevd[1365]: Using default interface naming scheme 'v255'.
Mar 2 13:10:23.231986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:10:23.248470 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:10:23.305413 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:10:23.311709 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 2 13:10:23.359855 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:10:23.415210 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:10:23.425190 kernel: hv_vmbus: registering driver hv_balloon
Mar 2 13:10:23.432110 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 2 13:10:23.432169 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 2 13:10:23.432192 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#293 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:10:23.448375 kernel: loop3: detected capacity change from 0 to 114432
Mar 2 13:10:23.454042 systemd-networkd[1374]: lo: Link UP
Mar 2 13:10:23.454048 systemd-networkd[1374]: lo: Gained carrier
Mar 2 13:10:23.455946 systemd-networkd[1374]: Enumeration completed
Mar 2 13:10:23.456038 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:10:23.457492 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:10:23.457497 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:10:23.467597 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:10:23.487297 kernel: hv_vmbus: registering driver hyperv_fb
Mar 2 13:10:23.490912 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 2 13:10:23.502172 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 2 13:10:23.496340 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:23.506620 kernel: Console: switching to colour dummy device 80x25
Mar 2 13:10:23.515192 kernel: Console: switching to colour frame buffer device 128x48
Mar 2 13:10:23.515976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:10:23.516148 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:23.526183 kernel: mlx5_core b0d4:00:02.0 enP45268s1: Link up
Mar 2 13:10:23.529308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:23.550197 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1378)
Mar 2 13:10:23.550332 kernel: hv_netvsc 000d3a6d-850c-000d-3a6d-850c000d3a6d eth0: Data path switched to VF: enP45268s1
Mar 2 13:10:23.555673 systemd-networkd[1374]: enP45268s1: Link UP
Mar 2 13:10:23.556236 systemd-networkd[1374]: eth0: Link UP
Mar 2 13:10:23.556243 systemd-networkd[1374]: eth0: Gained carrier
Mar 2 13:10:23.556259 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:10:23.559404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:10:23.559565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:23.560462 systemd-networkd[1374]: enP45268s1: Gained carrier
Mar 2 13:10:23.574325 systemd-networkd[1374]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 2 13:10:23.574373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:10:23.622725 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 2 13:10:23.633309 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:10:23.742590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:10:23.880039 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:10:23.893441 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:10:23.902184 kernel: loop4: detected capacity change from 0 to 197488
Mar 2 13:10:23.920317 kernel: loop5: detected capacity change from 0 to 31320
Mar 2 13:10:23.934180 kernel: loop6: detected capacity change from 0 to 114328
Mar 2 13:10:23.946184 kernel: loop7: detected capacity change from 0 to 114432
Mar 2 13:10:23.956131 (sd-merge)[1466]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 2 13:10:23.956553 (sd-merge)[1466]: Merged extensions into '/usr'.
Mar 2 13:10:23.968707 systemd[1]: Reloading requested from client PID 1338 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:10:23.968814 systemd[1]: Reloading...
Mar 2 13:10:24.029298 zram_generator::config[1494]: No configuration found.
Mar 2 13:10:24.044185 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:10:24.168477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:10:24.240234 systemd[1]: Reloading finished in 271 ms.
Mar 2 13:10:24.267959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:10:24.273506 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:10:24.279698 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:10:24.288040 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:10:24.297277 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:10:24.303341 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:10:24.311400 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:10:24.313712 lvm[1556]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:10:24.323364 systemd[1]: Reloading requested from client PID 1555 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:10:24.323380 systemd[1]: Reloading...
Mar 2 13:10:24.340313 systemd-tmpfiles[1557]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:10:24.340913 systemd-tmpfiles[1557]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:10:24.341696 systemd-tmpfiles[1557]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:10:24.342014 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
Mar 2 13:10:24.342129 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
Mar 2 13:10:24.363554 systemd-tmpfiles[1557]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:10:24.363668 systemd-tmpfiles[1557]: Skipping /boot
Mar 2 13:10:24.376567 systemd-tmpfiles[1557]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:10:24.376652 systemd-tmpfiles[1557]: Skipping /boot
Mar 2 13:10:24.410262 zram_generator::config[1588]: No configuration found.
Mar 2 13:10:24.512262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:10:24.586178 systemd[1]: Reloading finished in 262 ms.
Mar 2 13:10:24.610526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:10:24.617731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:10:24.637450 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:10:24.663358 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:10:24.672424 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:10:24.690382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:10:24.695764 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:10:24.703595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:10:24.706412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:10:24.722436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:10:24.734438 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:10:24.741560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:10:24.742265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:10:24.742930 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:10:24.753566 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:10:24.753699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:10:24.762533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:10:24.762679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:10:24.773748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:10:24.778384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:10:24.785643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:10:24.793593 augenrules[1673]: No rules
Mar 2 13:10:24.794917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:10:24.799207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:10:24.800343 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:10:24.805689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:10:24.815302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:10:24.815447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:10:24.820766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:10:24.820892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:10:24.821312 systemd-resolved[1654]: Positive Trust Anchors:
Mar 2 13:10:24.821595 systemd-resolved[1654]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:10:24.821631 systemd-resolved[1654]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:10:24.827139 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:10:24.827288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:10:24.833958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:10:24.845199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:10:24.857367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:10:24.861015 systemd-resolved[1654]: Using system hostname 'ci-4081.3.101-d5e61b93e9'.
Mar 2 13:10:24.865422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:10:24.873401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:10:24.881483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:10:24.886211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:10:24.886579 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:10:24.892420 systemd-networkd[1374]: eth0: Gained IPv6LL
Mar 2 13:10:24.894648 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:10:24.899899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:10:24.900069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:10:24.905367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:10:24.911131 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:10:24.911386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:10:24.916387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:10:24.916523 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:10:24.922425 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:10:24.922549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:10:24.929988 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:10:24.936870 systemd[1]: Reached target network.target - Network.
Mar 2 13:10:24.940985 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:10:24.945481 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:10:24.950581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:10:24.950649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:10:25.322928 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:10:25.328521 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:10:27.890195 ldconfig[1333]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:10:27.913950 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:10:27.923373 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:10:27.935901 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:10:27.940597 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:10:27.944939 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:10:27.949945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:10:27.955291 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:10:27.959639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:10:27.965286 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:10:27.970434 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:10:27.970564 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:10:27.974143 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:10:27.978506 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:10:27.984221 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:10:27.991764 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:10:27.996606 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:10:28.001059 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:10:28.004904 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:10:28.008622 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:10:28.008643 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:10:28.020243 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 2 13:10:28.027291 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:10:28.038332 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 2 13:10:28.043720 (chronyd)[1703]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 2 13:10:28.045921 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:10:28.051294 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:10:28.057331 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:10:28.065475 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:10:28.065514 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 2 13:10:28.068334 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 2 13:10:28.076597 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 2 13:10:28.077773 KVP[1711]: KVP starting; pid is:1711
Mar 2 13:10:28.078623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:10:28.082154 jq[1709]: false
Mar 2 13:10:28.083627 chronyd[1715]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 2 13:10:28.085609 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:10:28.095350 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:10:28.104406 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:10:28.110806 chronyd[1715]: Timezone right/UTC failed leap second check, ignoring
Mar 2 13:10:28.112345 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:10:28.111005 chronyd[1715]: Loaded seccomp filter (level 2)
Mar 2 13:10:28.123408 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found loop4
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found loop5
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found loop6
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found loop7
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda1
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda2
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda3
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found usr
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda4
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda6
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda7
Mar 2 13:10:28.131179 extend-filesystems[1710]: Found sda9
Mar 2 13:10:28.131179 extend-filesystems[1710]: Checking size of /dev/sda9
Mar 2 13:10:28.316168 kernel: hv_utils: KVP IC version 4.0
Mar 2 13:10:28.144376 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:10:28.317178 extend-filesystems[1710]: Old size kept for /dev/sda9
Mar 2 13:10:28.317178 extend-filesystems[1710]: Found sr0
Mar 2 13:10:28.167989 dbus-daemon[1706]: [system] SELinux support is enabled
Mar 2 13:10:28.152851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:10:28.183148 KVP[1711]: KVP LIC Version: 3.1
Mar 2 13:10:28.153336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:10:28.339655 update_engine[1730]: I20260302 13:10:28.254015 1730 main.cc:92] Flatcar Update Engine starting
Mar 2 13:10:28.339655 update_engine[1730]: I20260302 13:10:28.262945 1730 update_check_scheduler.cc:74] Next update check in 10m32s
Mar 2 13:10:28.154461 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:10:28.339920 jq[1733]: true
Mar 2 13:10:28.163295 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:10:28.174119 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:10:28.185125 systemd[1]: Started chronyd.service - NTP client/server.
Mar 2 13:10:28.197601 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:10:28.199214 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:10:28.201691 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:10:28.202311 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:10:28.236841 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:10:28.237019 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:10:28.254845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:10:28.257919 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:10:28.263983 systemd-logind[1725]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:10:28.272384 systemd-logind[1725]: New seat seat0.
Mar 2 13:10:28.273960 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:10:28.282325 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:10:28.352680 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:10:28.359525 tar[1743]: linux-arm64/LICENSE
Mar 2 13:10:28.359525 tar[1743]: linux-arm64/helm
Mar 2 13:10:28.352717 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:10:28.360394 dbus-daemon[1706]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 13:10:28.364684 jq[1752]: true
Mar 2 13:10:28.366763 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:10:28.366785 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:10:28.388196 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1750)
Mar 2 13:10:28.408432 (ntainerd)[1760]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:10:28.423597 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:10:28.428175 coreos-metadata[1705]: Mar 02 13:10:28.427 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 2 13:10:28.430021 coreos-metadata[1705]: Mar 02 13:10:28.429 INFO Fetch successful
Mar 2 13:10:28.430071 coreos-metadata[1705]: Mar 02 13:10:28.430 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 2 13:10:28.434327 coreos-metadata[1705]: Mar 02 13:10:28.434 INFO Fetch successful
Mar 2 13:10:28.435057 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:10:28.441268 coreos-metadata[1705]: Mar 02 13:10:28.441 INFO Fetching http://168.63.129.16/machine/fe67f2f3-f19c-49cc-bcbc-d481b4323a0e/85cad035%2De7f3%2D4c6d%2D9999%2Db6b8d109ef64.%5Fci%2D4081.3.101%2Dd5e61b93e9?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 2 13:10:28.441268 coreos-metadata[1705]: Mar 02 13:10:28.441 INFO Fetch successful
Mar 2 13:10:28.441268 coreos-metadata[1705]: Mar 02 13:10:28.441 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 2 13:10:28.450263 coreos-metadata[1705]: Mar 02 13:10:28.450 INFO Fetch successful
Mar 2 13:10:28.531920 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 2 13:10:28.544217 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:10:28.618651 bash[1815]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:10:28.622278 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:10:28.630911 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:10:28.744206 locksmithd[1792]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:10:29.219589 tar[1743]: linux-arm64/README.md
Mar 2 13:10:29.237185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:10:29.266880 containerd[1760]: time="2026-03-02T13:10:29.266790620Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:10:29.294653 containerd[1760]: time="2026-03-02T13:10:29.294605100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.296578 containerd[1760]: time="2026-03-02T13:10:29.296545100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:10:29.296662 containerd[1760]: time="2026-03-02T13:10:29.296648020Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:10:29.296745 containerd[1760]: time="2026-03-02T13:10:29.296731580Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:10:29.296950 containerd[1760]: time="2026-03-02T13:10:29.296931140Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:10:29.297017 containerd[1760]: time="2026-03-02T13:10:29.297002860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297135 containerd[1760]: time="2026-03-02T13:10:29.297117100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297217 containerd[1760]: time="2026-03-02T13:10:29.297202700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297468 containerd[1760]: time="2026-03-02T13:10:29.297445180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297532 containerd[1760]: time="2026-03-02T13:10:29.297519180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297586 containerd[1760]: time="2026-03-02T13:10:29.297571940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297631 containerd[1760]: time="2026-03-02T13:10:29.297618860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.297762 containerd[1760]: time="2026-03-02T13:10:29.297744420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.298030 containerd[1760]: time="2026-03-02T13:10:29.298009980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:10:29.298213 containerd[1760]: time="2026-03-02T13:10:29.298192220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:10:29.298314 containerd[1760]: time="2026-03-02T13:10:29.298297340Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:10:29.298457 containerd[1760]: time="2026-03-02T13:10:29.298437300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:10:29.298566 containerd[1760]: time="2026-03-02T13:10:29.298549380Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:10:29.316188 containerd[1760]: time="2026-03-02T13:10:29.315723500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:10:29.316188 containerd[1760]: time="2026-03-02T13:10:29.315787020Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:10:29.316188 containerd[1760]: time="2026-03-02T13:10:29.315803420Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:10:29.316188 containerd[1760]: time="2026-03-02T13:10:29.315820780Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:10:29.316188 containerd[1760]: time="2026-03-02T13:10:29.315834740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:10:29.316188 containerd[1760]: time="2026-03-02T13:10:29.315999940Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318343740Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318469860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318486700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318500420Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318518380Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318531100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318543060Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318556820Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318573060Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318587580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318600340Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318612940Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318639460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319023 containerd[1760]: time="2026-03-02T13:10:29.318653820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318666860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318680980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318692860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318710820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318723380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318736260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318749260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318763940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318777620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318790060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318803860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318820860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318841180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318853140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.319403 containerd[1760]: time="2026-03-02T13:10:29.318864220Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:10:29.319713 containerd[1760]: time="2026-03-02T13:10:29.318926820Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:10:29.319713 containerd[1760]: time="2026-03-02T13:10:29.318949100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:10:29.319713 containerd[1760]: time="2026-03-02T13:10:29.318960020Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:10:29.320178 containerd[1760]: time="2026-03-02T13:10:29.318972540Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:10:29.320178 containerd[1760]: time="2026-03-02T13:10:29.319865300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.320178 containerd[1760]: time="2026-03-02T13:10:29.319886500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:10:29.320178 containerd[1760]: time="2026-03-02T13:10:29.319897140Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:10:29.320178 containerd[1760]: time="2026-03-02T13:10:29.319907260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:10:29.320411 containerd[1760]: time="2026-03-02T13:10:29.320353460Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 13:10:29.320562 containerd[1760]: time="2026-03-02T13:10:29.320545500Z" level=info msg="Connect containerd service"
Mar 2 13:10:29.320648 containerd[1760]: time="2026-03-02T13:10:29.320634500Z" level=info msg="using legacy CRI server"
Mar 2 13:10:29.320695 containerd[1760]: time="2026-03-02T13:10:29.320683820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:10:29.320831 containerd[1760]: time="2026-03-02T13:10:29.320815140Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322660500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322795180Z" level=info msg="Start subscribing containerd event"
Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322842780Z" level=info msg="Start recovering state"
Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322907260Z" level=info msg="Start event monitor"
Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322918660Z" level=info msg="Start snapshots
syncer" Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322927260Z" level=info msg="Start cni network conf syncer for default" Mar 2 13:10:29.323034 containerd[1760]: time="2026-03-02T13:10:29.322935100Z" level=info msg="Start streaming server" Mar 2 13:10:29.323653 containerd[1760]: time="2026-03-02T13:10:29.323634420Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 13:10:29.324283 containerd[1760]: time="2026-03-02T13:10:29.324241220Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 13:10:29.324709 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 13:10:29.328186 containerd[1760]: time="2026-03-02T13:10:29.327171940Z" level=info msg="containerd successfully booted in 0.058544s" Mar 2 13:10:29.341441 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:10:29.341560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:10:29.393360 sshd_keygen[1735]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 13:10:29.412495 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 13:10:29.422878 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 13:10:29.433362 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 2 13:10:29.439653 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 13:10:29.439920 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 13:10:29.454483 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 13:10:29.464662 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 2 13:10:29.482547 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 13:10:29.493425 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 2 13:10:29.504455 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 2 13:10:29.509603 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 13:10:29.513812 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:10:29.518679 systemd[1]: Startup finished in 588ms (kernel) + 12.018s (initrd) + 11.059s (userspace) = 23.665s. Mar 2 13:10:29.765022 kubelet[1840]: E0302 13:10:29.764913 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:10:29.767659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:10:29.767802 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:10:29.820930 login[1867]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 2 13:10:29.822900 login[1868]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:29.832131 systemd-logind[1725]: New session 1 of user core. Mar 2 13:10:29.832656 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:10:29.840710 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:10:29.866159 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 13:10:29.876390 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 13:10:29.895781 (systemd)[1878]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:10:30.013704 systemd[1878]: Queued start job for default target default.target. Mar 2 13:10:30.024056 systemd[1878]: Created slice app.slice - User Application Slice. 
Mar 2 13:10:30.024083 systemd[1878]: Reached target paths.target - Paths. Mar 2 13:10:30.024095 systemd[1878]: Reached target timers.target - Timers. Mar 2 13:10:30.025238 systemd[1878]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:10:30.035178 systemd[1878]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:10:30.035231 systemd[1878]: Reached target sockets.target - Sockets. Mar 2 13:10:30.035243 systemd[1878]: Reached target basic.target - Basic System. Mar 2 13:10:30.035284 systemd[1878]: Reached target default.target - Main User Target. Mar 2 13:10:30.035309 systemd[1878]: Startup finished in 134ms. Mar 2 13:10:30.035618 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:10:30.037013 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:10:30.821382 login[1867]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:30.825551 systemd-logind[1725]: New session 2 of user core. Mar 2 13:10:30.835299 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 2 13:10:31.215923 waagent[1864]: 2026-03-02T13:10:31.215793Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 2 13:10:31.220187 waagent[1864]: 2026-03-02T13:10:31.220125Z INFO Daemon Daemon OS: flatcar 4081.3.101 Mar 2 13:10:31.223628 waagent[1864]: 2026-03-02T13:10:31.223592Z INFO Daemon Daemon Python: 3.11.9 Mar 2 13:10:31.227041 waagent[1864]: 2026-03-02T13:10:31.226981Z INFO Daemon Daemon Run daemon Mar 2 13:10:31.230069 waagent[1864]: 2026-03-02T13:10:31.230034Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.101' Mar 2 13:10:31.236868 waagent[1864]: 2026-03-02T13:10:31.236686Z INFO Daemon Daemon Using waagent for provisioning Mar 2 13:10:31.240682 waagent[1864]: 2026-03-02T13:10:31.240643Z INFO Daemon Daemon Activate resource disk Mar 2 13:10:31.244270 waagent[1864]: 2026-03-02T13:10:31.244235Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 2 13:10:31.253242 waagent[1864]: 2026-03-02T13:10:31.253201Z INFO Daemon Daemon Found device: None Mar 2 13:10:31.256669 waagent[1864]: 2026-03-02T13:10:31.256634Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 2 13:10:31.263021 waagent[1864]: 2026-03-02T13:10:31.262988Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 2 13:10:31.273053 waagent[1864]: 2026-03-02T13:10:31.273006Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 2 13:10:31.277663 waagent[1864]: 2026-03-02T13:10:31.277625Z INFO Daemon Daemon Running default provisioning handler Mar 2 13:10:31.288091 waagent[1864]: 2026-03-02T13:10:31.288037Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Mar 2 13:10:31.298494 waagent[1864]: 2026-03-02T13:10:31.298447Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 2 13:10:31.305688 waagent[1864]: 2026-03-02T13:10:31.305654Z INFO Daemon Daemon cloud-init is enabled: False Mar 2 13:10:31.309465 waagent[1864]: 2026-03-02T13:10:31.309434Z INFO Daemon Daemon Copying ovf-env.xml Mar 2 13:10:31.431195 waagent[1864]: 2026-03-02T13:10:31.431094Z INFO Daemon Daemon Successfully mounted dvd Mar 2 13:10:31.461845 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 2 13:10:31.464188 waagent[1864]: 2026-03-02T13:10:31.464109Z INFO Daemon Daemon Detect protocol endpoint Mar 2 13:10:31.467945 waagent[1864]: 2026-03-02T13:10:31.467860Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 2 13:10:31.472191 waagent[1864]: 2026-03-02T13:10:31.472130Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 2 13:10:31.476961 waagent[1864]: 2026-03-02T13:10:31.476923Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 2 13:10:31.481029 waagent[1864]: 2026-03-02T13:10:31.480989Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 2 13:10:31.484792 waagent[1864]: 2026-03-02T13:10:31.484750Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 2 13:10:31.523517 waagent[1864]: 2026-03-02T13:10:31.523477Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 2 13:10:31.528749 waagent[1864]: 2026-03-02T13:10:31.528725Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 2 13:10:31.532777 waagent[1864]: 2026-03-02T13:10:31.532734Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 2 13:10:31.753250 waagent[1864]: 2026-03-02T13:10:31.752601Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 2 13:10:31.757623 waagent[1864]: 2026-03-02T13:10:31.757583Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 2 13:10:31.764547 waagent[1864]: 2026-03-02T13:10:31.764501Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 2 13:10:31.783742 waagent[1864]: 2026-03-02T13:10:31.783703Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 2 13:10:31.788021 waagent[1864]: 2026-03-02T13:10:31.787977Z INFO Daemon Mar 2 13:10:31.790156 waagent[1864]: 2026-03-02T13:10:31.790114Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5d6236a3-fe9f-4c92-8b00-e1d91109b594 eTag: 1276441327265334201 source: Fabric] Mar 2 13:10:31.798618 waagent[1864]: 2026-03-02T13:10:31.798579Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 2 13:10:31.803891 waagent[1864]: 2026-03-02T13:10:31.803851Z INFO Daemon Mar 2 13:10:31.806078 waagent[1864]: 2026-03-02T13:10:31.806039Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 2 13:10:31.814760 waagent[1864]: 2026-03-02T13:10:31.814729Z INFO Daemon Daemon Downloading artifacts profile blob Mar 2 13:10:31.888400 waagent[1864]: 2026-03-02T13:10:31.888335Z INFO Daemon Downloaded certificate {'thumbprint': '9BB02D1079F4FA5C05D6BDD432F1C9BE27AC4DF0', 'hasPrivateKey': True} Mar 2 13:10:31.896062 waagent[1864]: 2026-03-02T13:10:31.896019Z INFO Daemon Fetch goal state completed Mar 2 13:10:31.905485 waagent[1864]: 2026-03-02T13:10:31.905450Z INFO Daemon Daemon Starting provisioning Mar 2 13:10:31.909435 waagent[1864]: 2026-03-02T13:10:31.909396Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 2 13:10:31.913066 waagent[1864]: 2026-03-02T13:10:31.913036Z INFO Daemon Daemon Set hostname [ci-4081.3.101-d5e61b93e9] Mar 2 13:10:31.920184 waagent[1864]: 2026-03-02T13:10:31.919418Z INFO Daemon Daemon Publish hostname [ci-4081.3.101-d5e61b93e9] Mar 2 13:10:31.924794 waagent[1864]: 2026-03-02T13:10:31.924750Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 2 13:10:31.929505 waagent[1864]: 2026-03-02T13:10:31.929465Z INFO Daemon Daemon Primary interface is [eth0] Mar 2 13:10:31.957237 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:10:31.957243 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:10:31.957268 systemd-networkd[1374]: eth0: DHCP lease lost Mar 2 13:10:31.958445 waagent[1864]: 2026-03-02T13:10:31.958369Z INFO Daemon Daemon Create user account if not exists Mar 2 13:10:31.962572 waagent[1864]: 2026-03-02T13:10:31.962532Z INFO Daemon Daemon User core already exists, skip useradd Mar 2 13:10:31.966721 waagent[1864]: 2026-03-02T13:10:31.966685Z INFO Daemon Daemon Configure sudoer Mar 2 13:10:31.967337 systemd-networkd[1374]: eth0: DHCPv6 lease lost Mar 2 13:10:31.970376 waagent[1864]: 2026-03-02T13:10:31.970327Z INFO Daemon Daemon Configure sshd Mar 2 13:10:31.973637 waagent[1864]: 2026-03-02T13:10:31.973593Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 2 13:10:31.982652 waagent[1864]: 2026-03-02T13:10:31.982616Z INFO Daemon Daemon Deploy ssh public key. 
Mar 2 13:10:31.999238 systemd-networkd[1374]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:10:33.118052 waagent[1864]: 2026-03-02T13:10:33.114307Z INFO Daemon Daemon Provisioning complete Mar 2 13:10:33.129944 waagent[1864]: 2026-03-02T13:10:33.129900Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 2 13:10:33.134879 waagent[1864]: 2026-03-02T13:10:33.134842Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 2 13:10:33.142401 waagent[1864]: 2026-03-02T13:10:33.142363Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 2 13:10:33.266745 waagent[1927]: 2026-03-02T13:10:33.266122Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 2 13:10:33.266745 waagent[1927]: 2026-03-02T13:10:33.266283Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.101 Mar 2 13:10:33.266745 waagent[1927]: 2026-03-02T13:10:33.266339Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 2 13:10:33.304046 waagent[1927]: 2026-03-02T13:10:33.303986Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.101; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 2 13:10:33.304330 waagent[1927]: 2026-03-02T13:10:33.304292Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:10:33.304461 waagent[1927]: 2026-03-02T13:10:33.304430Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:10:33.311310 waagent[1927]: 2026-03-02T13:10:33.311254Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 2 13:10:33.315926 waagent[1927]: 2026-03-02T13:10:33.315891Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 2 13:10:33.316439 waagent[1927]: 2026-03-02T13:10:33.316401Z INFO ExtHandler Mar 2 13:10:33.316584 waagent[1927]: 2026-03-02T13:10:33.316552Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 292b3bb4-819c-4ff3-8168-c86c157915f9 eTag: 1276441327265334201 source: Fabric] Mar 2 13:10:33.316944 waagent[1927]: 2026-03-02T13:10:33.316907Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 2 13:10:33.317630 waagent[1927]: 2026-03-02T13:10:33.317581Z INFO ExtHandler Mar 2 13:10:33.318324 waagent[1927]: 2026-03-02T13:10:33.317740Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 2 13:10:33.321192 waagent[1927]: 2026-03-02T13:10:33.320673Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 2 13:10:33.384905 waagent[1927]: 2026-03-02T13:10:33.384789Z INFO ExtHandler Downloaded certificate {'thumbprint': '9BB02D1079F4FA5C05D6BDD432F1C9BE27AC4DF0', 'hasPrivateKey': True} Mar 2 13:10:33.385370 waagent[1927]: 2026-03-02T13:10:33.385328Z INFO ExtHandler Fetch goal state completed Mar 2 13:10:33.398790 waagent[1927]: 2026-03-02T13:10:33.398744Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1927 Mar 2 13:10:33.398923 waagent[1927]: 2026-03-02T13:10:33.398888Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 2 13:10:33.400504 waagent[1927]: 2026-03-02T13:10:33.400462Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.101', '', 'Flatcar Container Linux by Kinvolk'] Mar 2 13:10:33.400854 waagent[1927]: 2026-03-02T13:10:33.400819Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 2 13:10:33.424745 waagent[1927]: 2026-03-02T13:10:33.424709Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 2 13:10:33.424917 waagent[1927]: 2026-03-02T13:10:33.424882Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 2 13:10:33.430761 waagent[1927]: 
2026-03-02T13:10:33.430716Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 2 13:10:33.436536 systemd[1]: Reloading requested from client PID 1940 ('systemctl') (unit waagent.service)... Mar 2 13:10:33.436551 systemd[1]: Reloading... Mar 2 13:10:33.517186 zram_generator::config[1977]: No configuration found. Mar 2 13:10:33.619617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:10:33.699591 systemd[1]: Reloading finished in 262 ms. Mar 2 13:10:33.728545 waagent[1927]: 2026-03-02T13:10:33.728463Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 2 13:10:33.734700 systemd[1]: Reloading requested from client PID 2028 ('systemctl') (unit waagent.service)... Mar 2 13:10:33.734823 systemd[1]: Reloading... Mar 2 13:10:33.799187 zram_generator::config[2065]: No configuration found. Mar 2 13:10:33.905568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:10:33.979658 systemd[1]: Reloading finished in 244 ms. Mar 2 13:10:34.004057 waagent[1927]: 2026-03-02T13:10:34.003977Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 2 13:10:34.004228 waagent[1927]: 2026-03-02T13:10:34.004140Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 2 13:10:34.726629 waagent[1927]: 2026-03-02T13:10:34.726546Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 2 13:10:34.727183 waagent[1927]: 2026-03-02T13:10:34.727129Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 2 13:10:34.727941 waagent[1927]: 2026-03-02T13:10:34.727890Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 2 13:10:34.728454 waagent[1927]: 2026-03-02T13:10:34.728233Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 2 13:10:34.728454 waagent[1927]: 2026-03-02T13:10:34.728397Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:10:34.728681 waagent[1927]: 2026-03-02T13:10:34.728643Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:10:34.729481 waagent[1927]: 2026-03-02T13:10:34.728793Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:10:34.729481 waagent[1927]: 2026-03-02T13:10:34.729007Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 2 13:10:34.729481 waagent[1927]: 2026-03-02T13:10:34.729197Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 2 13:10:34.729481 waagent[1927]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 2 13:10:34.729481 waagent[1927]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 2 13:10:34.729481 waagent[1927]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 2 13:10:34.729481 waagent[1927]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:10:34.729481 waagent[1927]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:10:34.729481 waagent[1927]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:10:34.729797 waagent[1927]: 2026-03-02T13:10:34.729755Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:10:34.729961 waagent[1927]: 2026-03-02T13:10:34.729917Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 2 13:10:34.730028 waagent[1927]: 2026-03-02T13:10:34.729974Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 2 13:10:34.730348 waagent[1927]: 2026-03-02T13:10:34.730299Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 2 13:10:34.730491 waagent[1927]: 2026-03-02T13:10:34.730447Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Mar 2 13:10:34.730732 waagent[1927]: 2026-03-02T13:10:34.730689Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 2 13:10:34.731242 waagent[1927]: 2026-03-02T13:10:34.731143Z INFO EnvHandler ExtHandler Configure routes Mar 2 13:10:34.733972 waagent[1927]: 2026-03-02T13:10:34.733942Z INFO EnvHandler ExtHandler Gateway:None Mar 2 13:10:34.734516 waagent[1927]: 2026-03-02T13:10:34.734477Z INFO EnvHandler ExtHandler Routes:None Mar 2 13:10:34.735348 waagent[1927]: 2026-03-02T13:10:34.735304Z INFO ExtHandler ExtHandler Mar 2 13:10:34.736660 waagent[1927]: 2026-03-02T13:10:34.736507Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 04953fc5-db1b-440b-8559-922246868bef correlation 843b6686-450e-4454-aee6-e57bfe18bbb9 created: 2026-03-02T13:09:31.677956Z] Mar 2 13:10:34.737536 waagent[1927]: 2026-03-02T13:10:34.737468Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 2 13:10:34.739299 waagent[1927]: 2026-03-02T13:10:34.739235Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Mar 2 13:10:34.773866 waagent[1927]: 2026-03-02T13:10:34.773769Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 627BE61D-C70B-4BC9-A6A2-CE7BBD1F3DB4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 2 13:10:34.793189 waagent[1927]: 2026-03-02T13:10:34.793106Z INFO MonitorHandler ExtHandler Network interfaces: Mar 2 13:10:34.793189 waagent[1927]: Executing ['ip', '-a', '-o', 'link']: Mar 2 13:10:34.793189 waagent[1927]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 2 13:10:34.793189 waagent[1927]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:85:0c brd ff:ff:ff:ff:ff:ff Mar 2 13:10:34.793189 waagent[1927]: 3: enP45268s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:85:0c brd ff:ff:ff:ff:ff:ff\ altname enP45268p0s2 Mar 2 13:10:34.793189 waagent[1927]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 2 13:10:34.793189 waagent[1927]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 2 13:10:34.793189 waagent[1927]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 2 13:10:34.793189 waagent[1927]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 2 13:10:34.793189 waagent[1927]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 2 13:10:34.793189 waagent[1927]: 2: eth0 inet6 fe80::20d:3aff:fe6d:850c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 2 13:10:34.824963 waagent[1927]: 2026-03-02T13:10:34.824672Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Mar 2 13:10:34.824963 waagent[1927]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:10:34.824963 waagent[1927]: pkts bytes target prot opt in out source destination Mar 2 13:10:34.824963 waagent[1927]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:10:34.824963 waagent[1927]: pkts bytes target prot opt in out source destination Mar 2 13:10:34.824963 waagent[1927]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:10:34.824963 waagent[1927]: pkts bytes target prot opt in out source destination Mar 2 13:10:34.824963 waagent[1927]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 2 13:10:34.824963 waagent[1927]: 2 481 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 2 13:10:34.824963 waagent[1927]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 2 13:10:34.828433 waagent[1927]: 2026-03-02T13:10:34.828369Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 2 13:10:34.828433 waagent[1927]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:10:34.828433 waagent[1927]: pkts bytes target prot opt in out source destination Mar 2 13:10:34.828433 waagent[1927]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:10:34.828433 waagent[1927]: pkts bytes target prot opt in out source destination Mar 2 13:10:34.828433 waagent[1927]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:10:34.828433 waagent[1927]: pkts bytes target prot opt in out source destination Mar 2 13:10:34.828433 waagent[1927]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 2 13:10:34.828433 waagent[1927]: 7 948 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 2 13:10:34.828433 waagent[1927]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 2 13:10:34.829665 waagent[1927]: 2026-03-02T13:10:34.829624Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 2 13:10:39.991703 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Mar 2 13:10:39.999305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:10:40.097194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:10:40.101154 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:10:40.211184 kubelet[2155]: E0302 13:10:40.210478 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:10:40.213429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:10:40.213566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:10:48.069994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:10:48.071391 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:59300.service - OpenSSH per-connection server daemon (10.200.16.10:59300). Mar 2 13:10:48.638867 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 59300 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:10:48.639702 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:10:48.643668 systemd-logind[1725]: New session 3 of user core. Mar 2 13:10:48.649306 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:10:49.074333 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:59306.service - OpenSSH per-connection server daemon (10.200.16.10:59306). 
Mar 2 13:10:49.563897 sshd[2169]: Accepted publickey for core from 10.200.16.10 port 59306 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:10:49.564803 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:49.569310 systemd-logind[1725]: New session 4 of user core.
Mar 2 13:10:49.575316 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 2 13:10:49.913186 sshd[2169]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:49.916766 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:59306.service: Deactivated successfully.
Mar 2 13:10:49.918685 systemd[1]: session-4.scope: Deactivated successfully.
Mar 2 13:10:49.919362 systemd-logind[1725]: Session 4 logged out. Waiting for processes to exit.
Mar 2 13:10:49.920462 systemd-logind[1725]: Removed session 4.
Mar 2 13:10:50.006243 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:34580.service - OpenSSH per-connection server daemon (10.200.16.10:34580).
Mar 2 13:10:50.241728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 2 13:10:50.251316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:10:50.345748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:10:50.349176 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:10:50.379670 kubelet[2186]: E0302 13:10:50.379617 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:10:50.382487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:10:50.382733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:10:50.493055 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 34580 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:10:50.494370 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:50.498142 systemd-logind[1725]: New session 5 of user core.
Mar 2 13:10:50.507276 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 2 13:10:50.839659 sshd[2176]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:50.842634 systemd-logind[1725]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:10:50.844062 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:34580.service: Deactivated successfully.
Mar 2 13:10:50.846100 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:10:50.847418 systemd-logind[1725]: Removed session 5.
Mar 2 13:10:50.930089 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:34582.service - OpenSSH per-connection server daemon (10.200.16.10:34582).
Mar 2 13:10:51.413773 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 34582 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:10:51.415038 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:51.418589 systemd-logind[1725]: New session 6 of user core.
Mar 2 13:10:51.430271 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 2 13:10:51.758398 sshd[2198]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:51.761665 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:34582.service: Deactivated successfully.
Mar 2 13:10:51.763565 systemd[1]: session-6.scope: Deactivated successfully.
Mar 2 13:10:51.764138 systemd-logind[1725]: Session 6 logged out. Waiting for processes to exit.
Mar 2 13:10:51.764880 systemd-logind[1725]: Removed session 6.
Mar 2 13:10:51.846356 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:34588.service - OpenSSH per-connection server daemon (10.200.16.10:34588).
Mar 2 13:10:51.894271 chronyd[1715]: Selected source PHC0
Mar 2 13:10:52.337802 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 34588 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:10:52.339106 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:52.342555 systemd-logind[1725]: New session 7 of user core.
Mar 2 13:10:52.353279 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 2 13:10:52.807962 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 2 13:10:52.808296 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:10:52.820761 sudo[2208]: pam_unix(sudo:session): session closed for user root
Mar 2 13:10:52.898355 sshd[2205]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:52.902409 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:34588.service: Deactivated successfully.
Mar 2 13:10:52.903848 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:10:52.904461 systemd-logind[1725]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:10:52.905624 systemd-logind[1725]: Removed session 7.
Mar 2 13:10:52.998483 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:34604.service - OpenSSH per-connection server daemon (10.200.16.10:34604).
Mar 2 13:10:53.492937 sshd[2213]: Accepted publickey for core from 10.200.16.10 port 34604 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:10:53.494308 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:53.498254 systemd-logind[1725]: New session 8 of user core.
Mar 2 13:10:53.504281 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:10:53.768832 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 2 13:10:53.769319 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:10:53.772292 sudo[2217]: pam_unix(sudo:session): session closed for user root
Mar 2 13:10:53.776345 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 2 13:10:53.776588 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:10:53.788420 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 2 13:10:53.789436 auditctl[2220]: No rules
Mar 2 13:10:53.789927 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:10:53.791349 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 2 13:10:53.793401 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:10:53.814954 augenrules[2238]: No rules
Mar 2 13:10:53.817230 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:10:53.818793 sudo[2216]: pam_unix(sudo:session): session closed for user root
Mar 2 13:10:53.897371 sshd[2213]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:53.900203 systemd-logind[1725]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:10:53.901112 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:34604.service: Deactivated successfully.
Mar 2 13:10:53.902788 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:10:53.904541 systemd-logind[1725]: Removed session 8.
Mar 2 13:10:53.982269 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:34610.service - OpenSSH per-connection server daemon (10.200.16.10:34610).
Mar 2 13:10:54.463754 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 34610 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0
Mar 2 13:10:54.464544 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:54.468475 systemd-logind[1725]: New session 9 of user core.
Mar 2 13:10:54.478364 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:10:54.736210 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:10:54.736491 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:10:56.492477 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 2 13:10:56.492518 (dockerd)[2265]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 2 13:10:57.221077 dockerd[2265]: time="2026-03-02T13:10:57.221025458Z" level=info msg="Starting up"
Mar 2 13:10:57.611306 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1155650490-merged.mount: Deactivated successfully.
Mar 2 13:10:58.294578 dockerd[2265]: time="2026-03-02T13:10:58.294536186Z" level=info msg="Loading containers: start."
Mar 2 13:10:58.518189 kernel: Initializing XFRM netlink socket
Mar 2 13:10:58.697106 systemd-networkd[1374]: docker0: Link UP
Mar 2 13:10:58.720714 dockerd[2265]: time="2026-03-02T13:10:58.720201140Z" level=info msg="Loading containers: done."
Mar 2 13:10:59.146258 dockerd[2265]: time="2026-03-02T13:10:59.146213495Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 2 13:10:59.146426 dockerd[2265]: time="2026-03-02T13:10:59.146315655Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 2 13:10:59.146459 dockerd[2265]: time="2026-03-02T13:10:59.146427655Z" level=info msg="Daemon has completed initialization"
Mar 2 13:10:59.347743 dockerd[2265]: time="2026-03-02T13:10:59.347234871Z" level=info msg="API listen on /run/docker.sock"
Mar 2 13:10:59.347472 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 2 13:10:59.687564 containerd[1760]: time="2026-03-02T13:10:59.687526299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 2 13:11:00.491632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 13:11:00.497351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:00.595544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:00.599457 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:11:00.630179 kubelet[2410]: E0302 13:11:00.628813 2410 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:11:00.630564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:11:00.630682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:11:04.157604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301017917.mount: Deactivated successfully.
Mar 2 13:11:09.511804 containerd[1760]: time="2026-03-02T13:11:09.511752737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:09.514685 containerd[1760]: time="2026-03-02T13:11:09.514656739Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796"
Mar 2 13:11:09.517669 containerd[1760]: time="2026-03-02T13:11:09.517641462Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:09.522257 containerd[1760]: time="2026-03-02T13:11:09.522230345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:09.523221 containerd[1760]: time="2026-03-02T13:11:09.523191866Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 9.835627967s"
Mar 2 13:11:09.523254 containerd[1760]: time="2026-03-02T13:11:09.523226186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\""
Mar 2 13:11:09.523767 containerd[1760]: time="2026-03-02T13:11:09.523738506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 2 13:11:10.741740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 13:11:10.747338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:10.778201 containerd[1760]: time="2026-03-02T13:11:10.776448179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:10.854685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:10.858632 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:11:10.888178 containerd[1760]: time="2026-03-02T13:11:10.888116540Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039"
Mar 2 13:11:10.961140 kubelet[2487]: E0302 13:11:10.961087 2487 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:11:10.964092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:11:10.964346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:11:11.238080 containerd[1760]: time="2026-03-02T13:11:11.237965955Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:11.245030 containerd[1760]: time="2026-03-02T13:11:11.243876360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:11.245030 containerd[1760]: time="2026-03-02T13:11:11.244909000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.721136374s"
Mar 2 13:11:11.245030 containerd[1760]: time="2026-03-02T13:11:11.244941280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\""
Mar 2 13:11:11.246011 containerd[1760]: time="2026-03-02T13:11:11.245987521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 2 13:11:11.536334 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 2 13:11:12.319726 containerd[1760]: time="2026-03-02T13:11:12.319675824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:12.322095 containerd[1760]: time="2026-03-02T13:11:12.322067185Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901"
Mar 2 13:11:12.326114 containerd[1760]: time="2026-03-02T13:11:12.325684468Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:12.335611 containerd[1760]: time="2026-03-02T13:11:12.335569515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:12.336766 containerd[1760]: time="2026-03-02T13:11:12.336733796Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.090626595s"
Mar 2 13:11:12.336826 containerd[1760]: time="2026-03-02T13:11:12.336771116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\""
Mar 2 13:11:12.337564 containerd[1760]: time="2026-03-02T13:11:12.337534757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 2 13:11:13.392873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915181447.mount: Deactivated successfully.
Mar 2 13:11:13.639282 containerd[1760]: time="2026-03-02T13:11:13.638611745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:13.641234 containerd[1760]: time="2026-03-02T13:11:13.641201907Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583"
Mar 2 13:11:13.644798 containerd[1760]: time="2026-03-02T13:11:13.644569229Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:13.648296 containerd[1760]: time="2026-03-02T13:11:13.648263752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:13.649125 containerd[1760]: time="2026-03-02T13:11:13.648958192Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.311388035s"
Mar 2 13:11:13.649125 containerd[1760]: time="2026-03-02T13:11:13.648988752Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\""
Mar 2 13:11:13.649930 containerd[1760]: time="2026-03-02T13:11:13.649752673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 2 13:11:13.869202 update_engine[1730]: I20260302 13:11:13.869125 1730 update_attempter.cc:509] Updating boot flags...
Mar 2 13:11:13.910204 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2518)
Mar 2 13:11:14.364744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876147816.mount: Deactivated successfully.
Mar 2 13:11:20.991671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 2 13:11:20.997314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:22.650329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:22.654254 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:11:22.684877 kubelet[2569]: E0302 13:11:22.684818 2569 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:11:22.686843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:11:22.686970 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:11:32.741793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 2 13:11:32.748571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:33.615557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:33.618809 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:11:33.648239 kubelet[2592]: E0302 13:11:33.648187 2592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:11:33.651083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:11:33.651242 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:11:34.560007 containerd[1760]: time="2026-03-02T13:11:34.559958802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:34.563177 containerd[1760]: time="2026-03-02T13:11:34.563051004Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Mar 2 13:11:34.566846 containerd[1760]: time="2026-03-02T13:11:34.566812126Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:34.572442 containerd[1760]: time="2026-03-02T13:11:34.572398410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:34.574124 containerd[1760]: time="2026-03-02T13:11:34.573891691Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 20.924098418s"
Mar 2 13:11:34.574124 containerd[1760]: time="2026-03-02T13:11:34.573930211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Mar 2 13:11:34.575205 containerd[1760]: time="2026-03-02T13:11:34.575181412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 2 13:11:35.205156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644830675.mount: Deactivated successfully.
Mar 2 13:11:35.227190 containerd[1760]: time="2026-03-02T13:11:35.227116210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:35.230350 containerd[1760]: time="2026-03-02T13:11:35.230325452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 2 13:11:35.233319 containerd[1760]: time="2026-03-02T13:11:35.233293774Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:35.238365 containerd[1760]: time="2026-03-02T13:11:35.238332978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:35.239771 containerd[1760]: time="2026-03-02T13:11:35.239743899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 664.455487ms"
Mar 2 13:11:35.239809 containerd[1760]: time="2026-03-02T13:11:35.239775179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 2 13:11:35.240307 containerd[1760]: time="2026-03-02T13:11:35.240127219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 2 13:11:35.895298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609568183.mount: Deactivated successfully.
Mar 2 13:11:36.910267 containerd[1760]: time="2026-03-02T13:11:36.910214623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:36.913006 containerd[1760]: time="2026-03-02T13:11:36.912979664Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165"
Mar 2 13:11:36.915921 containerd[1760]: time="2026-03-02T13:11:36.915892146Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:36.920671 containerd[1760]: time="2026-03-02T13:11:36.920638590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:11:36.922616 containerd[1760]: time="2026-03-02T13:11:36.922400271Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.682246452s"
Mar 2 13:11:36.922616 containerd[1760]: time="2026-03-02T13:11:36.922435471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Mar 2 13:11:38.787693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:38.794629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:38.826243 systemd[1]: Reloading requested from client PID 2721 ('systemctl') (unit session-9.scope)...
Mar 2 13:11:38.826259 systemd[1]: Reloading...
Mar 2 13:11:38.933209 zram_generator::config[2764]: No configuration found.
Mar 2 13:11:39.027344 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:11:39.104722 systemd[1]: Reloading finished in 278 ms.
Mar 2 13:11:39.152936 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:39.155887 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:11:39.156221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:39.164495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:11:39.316685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:11:39.321558 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:11:39.352964 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:11:39.658802 kubelet[2830]: I0302 13:11:39.658655 2830 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 2 13:11:39.658802 kubelet[2830]: I0302 13:11:39.658700 2830 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:11:39.660187 kubelet[2830]: I0302 13:11:39.659984 2830 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 13:11:39.660187 kubelet[2830]: I0302 13:11:39.660002 2830 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:11:39.660314 kubelet[2830]: I0302 13:11:39.660274 2830 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 2 13:11:39.852977 kubelet[2830]: E0302 13:11:39.852936 2830 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:11:39.853336 kubelet[2830]: I0302 13:11:39.853194 2830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:11:39.857463 kubelet[2830]: E0302 13:11:39.856917 2830 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:11:39.857463 kubelet[2830]: I0302 13:11:39.856971 2830 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:11:39.859530 kubelet[2830]: I0302 13:11:39.859511 2830 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 13:11:39.860397 kubelet[2830]: I0302 13:11:39.860367 2830 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:11:39.860537 kubelet[2830]: I0302 13:11:39.860399 2830 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-d5e61b93e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:11:39.860633 kubelet[2830]: I0302 13:11:39.860542 2830 topology_manager.go:143] "Creating topology manager with none policy"
Mar 2 13:11:39.860633 kubelet[2830]: I0302 13:11:39.860556 2830 container_manager_linux.go:308] "Creating device plugin manager"
Mar 2 13:11:39.860704 kubelet[2830]: I0302 13:11:39.860638 2830 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 13:11:39.865461 kubelet[2830]: I0302 13:11:39.865444 2830 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 2 13:11:39.865584 kubelet[2830]: I0302 13:11:39.865573 2830 kubelet.go:482] "Attempting to sync node with API server"
Mar 2 13:11:39.865621 kubelet[2830]: I0302 13:11:39.865590 2830 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:11:39.865621 kubelet[2830]: I0302 13:11:39.865604 2830 kubelet.go:394] "Adding apiserver pod source"
Mar 2 13:11:39.865621 kubelet[2830]: I0302 13:11:39.865612 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:11:39.868837 kubelet[2830]: I0302 13:11:39.868817 2830 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:11:39.870350 kubelet[2830]: I0302 13:11:39.870323 2830 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:11:39.870417 kubelet[2830]: I0302 13:11:39.870361 2830 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 13:11:39.870417 kubelet[2830]: W0302 13:11:39.870406 2830 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 13:11:39.875633 kubelet[2830]: I0302 13:11:39.875616 2830 server.go:1257] "Started kubelet"
Mar 2 13:11:39.876643 kubelet[2830]: I0302 13:11:39.876594 2830 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:11:39.877509 kubelet[2830]: I0302 13:11:39.877487 2830 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:11:39.877858 kubelet[2830]: I0302 13:11:39.877830 2830 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 2 13:11:39.880349 kubelet[2830]: I0302 13:11:39.880306 2830 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:11:39.880472 kubelet[2830]: I0302 13:11:39.880458 2830 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 13:11:39.880669 kubelet[2830]: I0302 13:11:39.880656 2830 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:11:39.884742 kubelet[2830]: I0302 13:11:39.884717 2830 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 2 13:11:39.884901 kubelet[2830]: E0302 13:11:39.884877 2830 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d5e61b93e9\" not found"
Mar 2 13:11:39.885260 kubelet[2830]: I0302 13:11:39.885241 2830 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:11:39.886856 kubelet[2830]: E0302 13:11:39.886832 2830 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d5e61b93e9?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms"
Mar 2 13:11:39.887289 kubelet[2830]: I0302 13:11:39.887254 2830 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 13:11:39.889242 kubelet[2830]: I0302 13:11:39.887314 2830 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 13:11:39.889242 kubelet[2830]: E0302 13:11:39.886988 2830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-d5e61b93e9.1899085769b40d89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-d5e61b93e9,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-d5e61b93e9,},FirstTimestamp:2026-03-02 13:11:39.875585417 +0000 UTC m=+0.551216412,LastTimestamp:2026-03-02 13:11:39.875585417 +0000 UTC m=+0.551216412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-d5e61b93e9,}"
Mar 2 13:11:39.889242 kubelet[2830]: E0302 13:11:39.888906 2830 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:11:39.889672 kubelet[2830]: I0302 13:11:39.889654 2830 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:11:39.889778 kubelet[2830]: I0302 13:11:39.889756 2830 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:11:39.889917 kubelet[2830]: I0302 13:11:39.889902 2830 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:11:39.896771 kubelet[2830]: I0302 13:11:39.896735 2830 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:11:39.897598 kubelet[2830]: I0302 13:11:39.897575 2830 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:11:39.897598 kubelet[2830]: I0302 13:11:39.897590 2830 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 2 13:11:39.897683 kubelet[2830]: I0302 13:11:39.897608 2830 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 2 13:11:39.897683 kubelet[2830]: E0302 13:11:39.897651 2830 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:11:39.943195 kubelet[2830]: E0302 13:11:39.942754 2830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-d5e61b93e9.1899085769b40d89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-d5e61b93e9,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-d5e61b93e9,},FirstTimestamp:2026-03-02 13:11:39.875585417 +0000 UTC m=+0.551216412,LastTimestamp:2026-03-02 13:11:39.875585417 +0000 UTC m=+0.551216412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-d5e61b93e9,}"
Mar 2 13:11:39.985976 kubelet[2830]: E0302 13:11:39.985946 2830 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d5e61b93e9\" not found"
Mar 2 13:11:39.991495 kubelet[2830]: I0302 13:11:39.991477 2830 cpu_manager.go:225] "Starting" policy="none"
Mar 2 13:11:39.991495 kubelet[2830]: I0302 13:11:39.991491 2830 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 2
13:11:39.991635 kubelet[2830]: I0302 13:11:39.991508 2830 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:11:39.997809 kubelet[2830]: I0302 13:11:39.997786 2830 policy_none.go:50] "Start" Mar 2 13:11:39.997809 kubelet[2830]: I0302 13:11:39.997812 2830 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:11:39.997904 kubelet[2830]: I0302 13:11:39.997823 2830 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:11:39.998370 kubelet[2830]: E0302 13:11:39.998351 2830 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 2 13:11:40.003139 kubelet[2830]: I0302 13:11:40.003125 2830 policy_none.go:44] "Start" Mar 2 13:11:40.006562 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:11:40.018189 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 2 13:11:40.020645 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:11:40.026587 kubelet[2830]: E0302 13:11:40.025981 2830 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:11:40.026587 kubelet[2830]: I0302 13:11:40.026148 2830 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:11:40.026587 kubelet[2830]: I0302 13:11:40.026159 2830 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:11:40.026587 kubelet[2830]: I0302 13:11:40.026401 2830 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:11:40.028194 kubelet[2830]: E0302 13:11:40.027739 2830 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:11:40.028194 kubelet[2830]: E0302 13:11:40.027917 2830 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.101-d5e61b93e9\" not found" Mar 2 13:11:40.088187 kubelet[2830]: E0302 13:11:40.088142 2830 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d5e61b93e9?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Mar 2 13:11:40.127539 kubelet[2830]: I0302 13:11:40.127509 2830 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.127831 kubelet[2830]: E0302 13:11:40.127798 2830 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.211508 systemd[1]: Created slice kubepods-burstable-pode48ea40e21f6be6cf1f149e8444ea4c5.slice - libcontainer container kubepods-burstable-pode48ea40e21f6be6cf1f149e8444ea4c5.slice. Mar 2 13:11:40.220938 kubelet[2830]: E0302 13:11:40.220908 2830 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.224984 systemd[1]: Created slice kubepods-burstable-pod1fe410bb27da8759af15f785724ea1c8.slice - libcontainer container kubepods-burstable-pod1fe410bb27da8759af15f785724ea1c8.slice. 
Mar 2 13:11:40.226839 kubelet[2830]: E0302 13:11:40.226812 2830 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.229027 systemd[1]: Created slice kubepods-burstable-podc55c8f476b0b846cc5d6d6497a84eeb6.slice - libcontainer container kubepods-burstable-podc55c8f476b0b846cc5d6d6497a84eeb6.slice. Mar 2 13:11:40.233894 kubelet[2830]: E0302 13:11:40.233744 2830 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288400 kubelet[2830]: I0302 13:11:40.288369 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288691 kubelet[2830]: I0302 13:11:40.288546 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288691 kubelet[2830]: I0302 13:11:40.288583 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 
13:11:40.288691 kubelet[2830]: I0302 13:11:40.288615 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288691 kubelet[2830]: I0302 13:11:40.288633 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288691 kubelet[2830]: I0302 13:11:40.288647 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c55c8f476b0b846cc5d6d6497a84eeb6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-d5e61b93e9\" (UID: \"c55c8f476b0b846cc5d6d6497a84eeb6\") " pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288824 kubelet[2830]: I0302 13:11:40.288663 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e48ea40e21f6be6cf1f149e8444ea4c5-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" (UID: \"e48ea40e21f6be6cf1f149e8444ea4c5\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288824 kubelet[2830]: I0302 13:11:40.288677 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e48ea40e21f6be6cf1f149e8444ea4c5-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.101-d5e61b93e9\" (UID: \"e48ea40e21f6be6cf1f149e8444ea4c5\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.288928 kubelet[2830]: I0302 13:11:40.288898 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e48ea40e21f6be6cf1f149e8444ea4c5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" (UID: \"e48ea40e21f6be6cf1f149e8444ea4c5\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.329756 kubelet[2830]: I0302 13:11:40.329707 2830 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.330047 kubelet[2830]: E0302 13:11:40.330012 2830 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.489059 kubelet[2830]: E0302 13:11:40.488948 2830 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d5e61b93e9?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Mar 2 13:11:40.528130 containerd[1760]: time="2026-03-02T13:11:40.527887065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-d5e61b93e9,Uid:e48ea40e21f6be6cf1f149e8444ea4c5,Namespace:kube-system,Attempt:0,}" Mar 2 13:11:40.534041 containerd[1760]: time="2026-03-02T13:11:40.533953069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-d5e61b93e9,Uid:1fe410bb27da8759af15f785724ea1c8,Namespace:kube-system,Attempt:0,}" Mar 2 13:11:40.540896 containerd[1760]: time="2026-03-02T13:11:40.540668034Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-d5e61b93e9,Uid:c55c8f476b0b846cc5d6d6497a84eeb6,Namespace:kube-system,Attempt:0,}" Mar 2 13:11:40.732474 kubelet[2830]: I0302 13:11:40.732449 2830 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:40.732883 kubelet[2830]: E0302 13:11:40.732854 2830 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:41.202340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473349789.mount: Deactivated successfully. Mar 2 13:11:41.223445 containerd[1760]: time="2026-03-02T13:11:41.223408143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:11:41.234679 containerd[1760]: time="2026-03-02T13:11:41.234612951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 2 13:11:41.238747 containerd[1760]: time="2026-03-02T13:11:41.238041193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:11:41.241503 containerd[1760]: time="2026-03-02T13:11:41.241471636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:11:41.244499 containerd[1760]: time="2026-03-02T13:11:41.244461118Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 
13:11:41.246706 containerd[1760]: time="2026-03-02T13:11:41.246680519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:11:41.249518 containerd[1760]: time="2026-03-02T13:11:41.249488481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:11:41.253025 containerd[1760]: time="2026-03-02T13:11:41.252990843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:11:41.254043 containerd[1760]: time="2026-03-02T13:11:41.254018684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 726.055339ms" Mar 2 13:11:41.258108 containerd[1760]: time="2026-03-02T13:11:41.258074527Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 717.350493ms" Mar 2 13:11:41.261346 containerd[1760]: time="2026-03-02T13:11:41.261283089Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 727.26938ms" Mar 2 13:11:41.289872 kubelet[2830]: E0302 13:11:41.289831 2830 controller.go:201] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d5e61b93e9?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Mar 2 13:11:41.535390 kubelet[2830]: I0302 13:11:41.535364 2830 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:41.535695 kubelet[2830]: E0302 13:11:41.535631 2830 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:41.959026 containerd[1760]: time="2026-03-02T13:11:41.958868889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:11:41.959026 containerd[1760]: time="2026-03-02T13:11:41.958929009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:11:41.959609 containerd[1760]: time="2026-03-02T13:11:41.958940489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:41.959609 containerd[1760]: time="2026-03-02T13:11:41.959259649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:41.961256 containerd[1760]: time="2026-03-02T13:11:41.960747490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:11:41.961256 containerd[1760]: time="2026-03-02T13:11:41.960796770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:11:41.961256 containerd[1760]: time="2026-03-02T13:11:41.960810450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:41.961256 containerd[1760]: time="2026-03-02T13:11:41.960875930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:41.965111 containerd[1760]: time="2026-03-02T13:11:41.964872053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:11:41.965111 containerd[1760]: time="2026-03-02T13:11:41.964923813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:11:41.965111 containerd[1760]: time="2026-03-02T13:11:41.964938933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:41.965269 containerd[1760]: time="2026-03-02T13:11:41.965110173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:41.986336 systemd[1]: Started cri-containerd-2f40b5a7ccaa8de623e3391b9fce8ea7e495ab081a02fc58a83f352b3163b0a6.scope - libcontainer container 2f40b5a7ccaa8de623e3391b9fce8ea7e495ab081a02fc58a83f352b3163b0a6. Mar 2 13:11:41.988043 systemd[1]: Started cri-containerd-c8ab03fc442886229f393643ede61dcdf55763e74fb3b79264a8b0f7bfe1e312.scope - libcontainer container c8ab03fc442886229f393643ede61dcdf55763e74fb3b79264a8b0f7bfe1e312. Mar 2 13:11:41.993026 systemd[1]: Started cri-containerd-02b2643d0b4390ac667daeac9cca110f90790b065a74cb342aca772f0f11076f.scope - libcontainer container 02b2643d0b4390ac667daeac9cca110f90790b065a74cb342aca772f0f11076f. 
Mar 2 13:11:42.026582 containerd[1760]: time="2026-03-02T13:11:42.026173695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-d5e61b93e9,Uid:e48ea40e21f6be6cf1f149e8444ea4c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8ab03fc442886229f393643ede61dcdf55763e74fb3b79264a8b0f7bfe1e312\"" Mar 2 13:11:42.033471 kubelet[2830]: E0302 13:11:42.033433 2830 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:11:42.041108 containerd[1760]: time="2026-03-02T13:11:42.041074186Z" level=info msg="CreateContainer within sandbox \"c8ab03fc442886229f393643ede61dcdf55763e74fb3b79264a8b0f7bfe1e312\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:11:42.045234 containerd[1760]: time="2026-03-02T13:11:42.045207188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-d5e61b93e9,Uid:c55c8f476b0b846cc5d6d6497a84eeb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"02b2643d0b4390ac667daeac9cca110f90790b065a74cb342aca772f0f11076f\"" Mar 2 13:11:42.048860 containerd[1760]: time="2026-03-02T13:11:42.048826111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-d5e61b93e9,Uid:1fe410bb27da8759af15f785724ea1c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f40b5a7ccaa8de623e3391b9fce8ea7e495ab081a02fc58a83f352b3163b0a6\"" Mar 2 13:11:42.059629 containerd[1760]: time="2026-03-02T13:11:42.059602198Z" level=info msg="CreateContainer within sandbox \"02b2643d0b4390ac667daeac9cca110f90790b065a74cb342aca772f0f11076f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:11:42.066554 
containerd[1760]: time="2026-03-02T13:11:42.066524123Z" level=info msg="CreateContainer within sandbox \"2f40b5a7ccaa8de623e3391b9fce8ea7e495ab081a02fc58a83f352b3163b0a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:11:42.108398 containerd[1760]: time="2026-03-02T13:11:42.108346552Z" level=info msg="CreateContainer within sandbox \"c8ab03fc442886229f393643ede61dcdf55763e74fb3b79264a8b0f7bfe1e312\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f42819dbd208bd30e152e1f807091f1512250ede9355f1d962303f0069dc8e4d\"" Mar 2 13:11:42.109797 containerd[1760]: time="2026-03-02T13:11:42.108894832Z" level=info msg="StartContainer for \"f42819dbd208bd30e152e1f807091f1512250ede9355f1d962303f0069dc8e4d\"" Mar 2 13:11:42.130690 containerd[1760]: time="2026-03-02T13:11:42.130642927Z" level=info msg="CreateContainer within sandbox \"02b2643d0b4390ac667daeac9cca110f90790b065a74cb342aca772f0f11076f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a846c6fc7e7138e865641bb38854b5d52034fdcf59fd4d028c0adfac3cd0df5b\"" Mar 2 13:11:42.131944 containerd[1760]: time="2026-03-02T13:11:42.131271488Z" level=info msg="StartContainer for \"a846c6fc7e7138e865641bb38854b5d52034fdcf59fd4d028c0adfac3cd0df5b\"" Mar 2 13:11:42.131330 systemd[1]: Started cri-containerd-f42819dbd208bd30e152e1f807091f1512250ede9355f1d962303f0069dc8e4d.scope - libcontainer container f42819dbd208bd30e152e1f807091f1512250ede9355f1d962303f0069dc8e4d. 
Mar 2 13:11:42.139176 containerd[1760]: time="2026-03-02T13:11:42.138721013Z" level=info msg="CreateContainer within sandbox \"2f40b5a7ccaa8de623e3391b9fce8ea7e495ab081a02fc58a83f352b3163b0a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57c643ba4ed8194177158c750989279bc5399103f1516bddb6c66e88837f89d2\"" Mar 2 13:11:42.139720 containerd[1760]: time="2026-03-02T13:11:42.139700693Z" level=info msg="StartContainer for \"57c643ba4ed8194177158c750989279bc5399103f1516bddb6c66e88837f89d2\"" Mar 2 13:11:42.168321 systemd[1]: Started cri-containerd-57c643ba4ed8194177158c750989279bc5399103f1516bddb6c66e88837f89d2.scope - libcontainer container 57c643ba4ed8194177158c750989279bc5399103f1516bddb6c66e88837f89d2. Mar 2 13:11:42.172982 systemd[1]: Started cri-containerd-a846c6fc7e7138e865641bb38854b5d52034fdcf59fd4d028c0adfac3cd0df5b.scope - libcontainer container a846c6fc7e7138e865641bb38854b5d52034fdcf59fd4d028c0adfac3cd0df5b. Mar 2 13:11:42.187086 containerd[1760]: time="2026-03-02T13:11:42.187037006Z" level=info msg="StartContainer for \"f42819dbd208bd30e152e1f807091f1512250ede9355f1d962303f0069dc8e4d\" returns successfully" Mar 2 13:11:42.228960 containerd[1760]: time="2026-03-02T13:11:42.228795435Z" level=info msg="StartContainer for \"57c643ba4ed8194177158c750989279bc5399103f1516bddb6c66e88837f89d2\" returns successfully" Mar 2 13:11:42.252580 containerd[1760]: time="2026-03-02T13:11:42.252485611Z" level=info msg="StartContainer for \"a846c6fc7e7138e865641bb38854b5d52034fdcf59fd4d028c0adfac3cd0df5b\" returns successfully" Mar 2 13:11:42.916879 kubelet[2830]: E0302 13:11:42.916608 2830 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:42.920233 kubelet[2830]: E0302 13:11:42.917910 2830 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:42.925533 kubelet[2830]: E0302 13:11:42.925513 2830 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.137664 kubelet[2830]: I0302 13:11:43.137634 2830 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.763628 kubelet[2830]: E0302 13:11:43.763584 2830 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.101-d5e61b93e9\" not found" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.803658 kubelet[2830]: I0302 13:11:43.803461 2830 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.803658 kubelet[2830]: E0302 13:11:43.803506 2830 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081.3.101-d5e61b93e9\": node \"ci-4081.3.101-d5e61b93e9\" not found" Mar 2 13:11:43.871327 kubelet[2830]: I0302 13:11:43.871140 2830 apiserver.go:52] "Watching apiserver" Mar 2 13:11:43.885872 kubelet[2830]: I0302 13:11:43.885586 2830 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.924030 kubelet[2830]: I0302 13:11:43.924012 2830 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.925178 kubelet[2830]: I0302 13:11:43.924072 2830 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.952175 kubelet[2830]: E0302 13:11:43.951988 2830 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.952175 kubelet[2830]: I0302 13:11:43.952127 2830 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.953290 kubelet[2830]: E0302 13:11:43.952660 2830 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-d5e61b93e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.953290 kubelet[2830]: E0302 13:11:43.952768 2830 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.955294 kubelet[2830]: E0302 13:11:43.955147 2830 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.955294 kubelet[2830]: I0302 13:11:43.955178 2830 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.956576 kubelet[2830]: E0302 13:11:43.956544 2830 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-d5e61b93e9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:43.988361 kubelet[2830]: I0302 13:11:43.988004 2830 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:11:44.925312 kubelet[2830]: I0302 13:11:44.925280 2830 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:44.933078 
kubelet[2830]: I0302 13:11:44.932777 2830 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:11:45.878259 systemd[1]: Reloading requested from client PID 3118 ('systemctl') (unit session-9.scope)... Mar 2 13:11:45.878272 systemd[1]: Reloading... Mar 2 13:11:45.951197 zram_generator::config[3157]: No configuration found. Mar 2 13:11:46.065985 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:11:46.155267 systemd[1]: Reloading finished in 276 ms. Mar 2 13:11:46.195423 kubelet[2830]: I0302 13:11:46.195355 2830 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:11:46.195966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:11:46.211105 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:11:46.211366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:11:46.217437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:11:46.317013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:11:46.321344 (kubelet)[3222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:11:46.360974 kubelet[3222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 2 13:11:46.366897 kubelet[3222]: I0302 13:11:46.366856 3222 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 2 13:11:46.366897 kubelet[3222]: I0302 13:11:46.366893 3222 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:11:46.366995 kubelet[3222]: I0302 13:11:46.366915 3222 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 2 13:11:46.366995 kubelet[3222]: I0302 13:11:46.366921 3222 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:11:46.367158 kubelet[3222]: I0302 13:11:46.367140 3222 server.go:951] "Client rotation is on, will bootstrap in background" Mar 2 13:11:46.368289 kubelet[3222]: I0302 13:11:46.368273 3222 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:11:46.370318 kubelet[3222]: I0302 13:11:46.370202 3222 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:11:46.374867 kubelet[3222]: E0302 13:11:46.374840 3222 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:11:46.374941 kubelet[3222]: I0302 13:11:46.374881 3222 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 2 13:11:46.377429 kubelet[3222]: I0302 13:11:46.377410 3222 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 2 13:11:46.377633 kubelet[3222]: I0302 13:11:46.377608 3222 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:11:46.377812 kubelet[3222]: I0302 13:11:46.377632 3222 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-d5e61b93e9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:11:46.377896 kubelet[3222]: I0302 13:11:46.377819 3222 topology_manager.go:143] "Creating topology manager with none policy" Mar 2 
13:11:46.377896 kubelet[3222]: I0302 13:11:46.377827 3222 container_manager_linux.go:308] "Creating device plugin manager" Mar 2 13:11:46.377896 kubelet[3222]: I0302 13:11:46.377849 3222 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 2 13:11:46.378029 kubelet[3222]: I0302 13:11:46.378017 3222 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 2 13:11:46.381174 kubelet[3222]: I0302 13:11:46.378140 3222 kubelet.go:482] "Attempting to sync node with API server" Mar 2 13:11:46.381174 kubelet[3222]: I0302 13:11:46.378159 3222 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:11:46.381174 kubelet[3222]: I0302 13:11:46.378184 3222 kubelet.go:394] "Adding apiserver pod source" Mar 2 13:11:46.381174 kubelet[3222]: I0302 13:11:46.378193 3222 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:11:46.381598 kubelet[3222]: I0302 13:11:46.381580 3222 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:11:46.383747 kubelet[3222]: I0302 13:11:46.382837 3222 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:11:46.384091 kubelet[3222]: I0302 13:11:46.384001 3222 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 2 13:11:46.389173 kubelet[3222]: I0302 13:11:46.388979 3222 server.go:1257] "Started kubelet" Mar 2 13:11:46.395990 kubelet[3222]: I0302 13:11:46.395967 3222 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 2 13:11:46.402646 kubelet[3222]: I0302 13:11:46.402416 3222 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:11:46.404749 kubelet[3222]: I0302 13:11:46.403613 3222 server.go:317] "Adding debug handlers to kubelet 
server" Mar 2 13:11:46.409702 kubelet[3222]: I0302 13:11:46.409596 3222 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:11:46.412307 kubelet[3222]: I0302 13:11:46.410159 3222 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 2 13:11:46.412307 kubelet[3222]: I0302 13:11:46.410326 3222 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:11:46.412307 kubelet[3222]: I0302 13:11:46.410521 3222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:11:46.414789 kubelet[3222]: I0302 13:11:46.414773 3222 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 2 13:11:46.416912 kubelet[3222]: I0302 13:11:46.416889 3222 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 2 13:11:46.417914 kubelet[3222]: I0302 13:11:46.417080 3222 reconciler.go:29] "Reconciler: start to sync state" Mar 2 13:11:46.418199 kubelet[3222]: I0302 13:11:46.417077 3222 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 2 13:11:46.418375 kubelet[3222]: I0302 13:11:46.418351 3222 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:11:46.418467 kubelet[3222]: I0302 13:11:46.418448 3222 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:11:46.421997 kubelet[3222]: I0302 13:11:46.420909 3222 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 2 13:11:46.421997 kubelet[3222]: I0302 13:11:46.420931 3222 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 2 13:11:46.421997 kubelet[3222]: I0302 13:11:46.420948 3222 kubelet.go:2501] "Starting kubelet main sync loop" Mar 2 13:11:46.421997 kubelet[3222]: E0302 13:11:46.420993 3222 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:11:46.439240 kubelet[3222]: I0302 13:11:46.437015 3222 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:11:46.481868 kubelet[3222]: I0302 13:11:46.481838 3222 cpu_manager.go:225] "Starting" policy="none" Mar 2 13:11:46.481868 kubelet[3222]: I0302 13:11:46.481858 3222 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 2 13:11:46.482003 kubelet[3222]: I0302 13:11:46.481882 3222 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 2 13:11:46.482027 kubelet[3222]: I0302 13:11:46.482015 3222 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 2 13:11:46.482057 kubelet[3222]: I0302 13:11:46.482026 3222 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 2 13:11:46.482057 kubelet[3222]: I0302 13:11:46.482044 3222 policy_none.go:50] "Start" Mar 2 13:11:46.482057 kubelet[3222]: I0302 13:11:46.482051 3222 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 2 13:11:46.482114 kubelet[3222]: I0302 13:11:46.482059 3222 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 2 13:11:46.482220 kubelet[3222]: I0302 13:11:46.482204 3222 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 2 13:11:46.482220 kubelet[3222]: I0302 13:11:46.482219 3222 policy_none.go:44] 
"Start" Mar 2 13:11:46.486757 kubelet[3222]: E0302 13:11:46.486730 3222 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:11:46.487359 kubelet[3222]: I0302 13:11:46.486886 3222 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 2 13:11:46.487359 kubelet[3222]: I0302 13:11:46.486900 3222 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:11:46.487359 kubelet[3222]: I0302 13:11:46.487280 3222 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 2 13:11:46.490004 kubelet[3222]: E0302 13:11:46.489988 3222 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:11:46.522398 kubelet[3222]: I0302 13:11:46.522370 3222 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.522948 kubelet[3222]: I0302 13:11:46.522678 3222 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.522948 kubelet[3222]: I0302 13:11:46.522721 3222 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.531139 kubelet[3222]: I0302 13:11:46.531110 3222 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:11:46.535586 kubelet[3222]: I0302 13:11:46.535563 3222 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:11:46.535711 kubelet[3222]: I0302 13:11:46.535695 3222 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:11:46.535743 kubelet[3222]: E0302 13:11:46.535728 3222 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-d5e61b93e9\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.593387 kubelet[3222]: I0302 13:11:46.593356 3222 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.604046 kubelet[3222]: I0302 13:11:46.604015 3222 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.604157 kubelet[3222]: I0302 13:11:46.604094 3222 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619573 kubelet[3222]: I0302 13:11:46.619518 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e48ea40e21f6be6cf1f149e8444ea4c5-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" (UID: \"e48ea40e21f6be6cf1f149e8444ea4c5\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619573 kubelet[3222]: I0302 13:11:46.619554 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e48ea40e21f6be6cf1f149e8444ea4c5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" (UID: \"e48ea40e21f6be6cf1f149e8444ea4c5\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619573 kubelet[3222]: I0302 13:11:46.619573 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: 
\"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619717 kubelet[3222]: I0302 13:11:46.619588 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619717 kubelet[3222]: I0302 13:11:46.619604 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619717 kubelet[3222]: I0302 13:11:46.619621 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c55c8f476b0b846cc5d6d6497a84eeb6-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-d5e61b93e9\" (UID: \"c55c8f476b0b846cc5d6d6497a84eeb6\") " pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619717 kubelet[3222]: I0302 13:11:46.619642 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e48ea40e21f6be6cf1f149e8444ea4c5-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" (UID: \"e48ea40e21f6be6cf1f149e8444ea4c5\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619717 kubelet[3222]: I0302 13:11:46.619659 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:46.619824 kubelet[3222]: I0302 13:11:46.619674 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fe410bb27da8759af15f785724ea1c8-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-d5e61b93e9\" (UID: \"1fe410bb27da8759af15f785724ea1c8\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:47.380050 kubelet[3222]: I0302 13:11:47.379832 3222 apiserver.go:52] "Watching apiserver" Mar 2 13:11:47.417916 kubelet[3222]: I0302 13:11:47.417870 3222 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:11:47.460502 kubelet[3222]: I0302 13:11:47.460478 3222 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:47.460817 kubelet[3222]: I0302 13:11:47.460797 3222 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:47.473775 kubelet[3222]: I0302 13:11:47.473343 3222 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 2 13:11:47.473775 kubelet[3222]: E0302 13:11:47.473410 3222 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-d5e61b93e9\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:47.474144 kubelet[3222]: I0302 13:11:47.474057 3222 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots]" Mar 2 13:11:47.474144 kubelet[3222]: E0302 13:11:47.474094 3222 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-d5e61b93e9\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" Mar 2 13:11:48.506962 kubelet[3222]: I0302 13:11:48.506899 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.101-d5e61b93e9" podStartSLOduration=2.506872132 podStartE2EDuration="2.506872132s" podCreationTimestamp="2026-03-02 13:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:11:48.496679565 +0000 UTC m=+2.169917352" watchObservedRunningTime="2026-03-02 13:11:48.506872132 +0000 UTC m=+2.180109919" Mar 2 13:11:48.516974 kubelet[3222]: I0302 13:11:48.516698 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.101-d5e61b93e9" podStartSLOduration=2.5166877789999997 podStartE2EDuration="2.516687779s" podCreationTimestamp="2026-03-02 13:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:11:48.507246812 +0000 UTC m=+2.180484639" watchObservedRunningTime="2026-03-02 13:11:48.516687779 +0000 UTC m=+2.189925566" Mar 2 13:11:49.874298 kubelet[3222]: I0302 13:11:49.874062 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.101-d5e61b93e9" podStartSLOduration=5.87404772 podStartE2EDuration="5.87404772s" podCreationTimestamp="2026-03-02 13:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:11:48.516826019 +0000 UTC m=+2.190063806" watchObservedRunningTime="2026-03-02 13:11:49.87404772 +0000 UTC m=+3.547285507" Mar 2 
13:11:52.772866 kubelet[3222]: I0302 13:11:52.772585 3222 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:11:52.773677 kubelet[3222]: I0302 13:11:52.773013 3222 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:11:52.773715 containerd[1760]: time="2026-03-02T13:11:52.772858894Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 13:11:53.466835 systemd[1]: Created slice kubepods-besteffort-pode44d5231_19b1_432e_a1c7_cb858f83c88a.slice - libcontainer container kubepods-besteffort-pode44d5231_19b1_432e_a1c7_cb858f83c88a.slice. Mar 2 13:11:53.554891 kubelet[3222]: I0302 13:11:53.554855 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e44d5231-19b1-432e-a1c7-cb858f83c88a-xtables-lock\") pod \"kube-proxy-r9vr6\" (UID: \"e44d5231-19b1-432e-a1c7-cb858f83c88a\") " pod="kube-system/kube-proxy-r9vr6" Mar 2 13:11:53.555026 kubelet[3222]: I0302 13:11:53.554898 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m2hz\" (UniqueName: \"kubernetes.io/projected/e44d5231-19b1-432e-a1c7-cb858f83c88a-kube-api-access-8m2hz\") pod \"kube-proxy-r9vr6\" (UID: \"e44d5231-19b1-432e-a1c7-cb858f83c88a\") " pod="kube-system/kube-proxy-r9vr6" Mar 2 13:11:53.555026 kubelet[3222]: I0302 13:11:53.554922 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e44d5231-19b1-432e-a1c7-cb858f83c88a-kube-proxy\") pod \"kube-proxy-r9vr6\" (UID: \"e44d5231-19b1-432e-a1c7-cb858f83c88a\") " pod="kube-system/kube-proxy-r9vr6" Mar 2 13:11:53.555026 kubelet[3222]: I0302 13:11:53.554936 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e44d5231-19b1-432e-a1c7-cb858f83c88a-lib-modules\") pod \"kube-proxy-r9vr6\" (UID: \"e44d5231-19b1-432e-a1c7-cb858f83c88a\") " pod="kube-system/kube-proxy-r9vr6" Mar 2 13:11:53.663100 kubelet[3222]: E0302 13:11:53.662934 3222 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 2 13:11:53.663100 kubelet[3222]: E0302 13:11:53.662964 3222 projected.go:196] Error preparing data for projected volume kube-api-access-8m2hz for pod kube-system/kube-proxy-r9vr6: configmap "kube-root-ca.crt" not found Mar 2 13:11:53.663100 kubelet[3222]: E0302 13:11:53.663025 3222 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e44d5231-19b1-432e-a1c7-cb858f83c88a-kube-api-access-8m2hz podName:e44d5231-19b1-432e-a1c7-cb858f83c88a nodeName:}" failed. No retries permitted until 2026-03-02 13:11:54.163006257 +0000 UTC m=+7.836244004 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8m2hz" (UniqueName: "kubernetes.io/projected/e44d5231-19b1-432e-a1c7-cb858f83c88a-kube-api-access-8m2hz") pod "kube-proxy-r9vr6" (UID: "e44d5231-19b1-432e-a1c7-cb858f83c88a") : configmap "kube-root-ca.crt" not found Mar 2 13:11:53.985197 systemd[1]: Created slice kubepods-besteffort-pod7a3e4bf7_e476_4a74_bd37_bd9a5e6fd723.slice - libcontainer container kubepods-besteffort-pod7a3e4bf7_e476_4a74_bd37_bd9a5e6fd723.slice. 
Mar 2 13:11:54.058300 kubelet[3222]: I0302 13:11:54.058228 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a3e4bf7-e476-4a74-bd37-bd9a5e6fd723-var-lib-calico\") pod \"tigera-operator-6447996989-wvw52\" (UID: \"7a3e4bf7-e476-4a74-bd37-bd9a5e6fd723\") " pod="tigera-operator/tigera-operator-6447996989-wvw52" Mar 2 13:11:54.058300 kubelet[3222]: I0302 13:11:54.058273 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb74r\" (UniqueName: \"kubernetes.io/projected/7a3e4bf7-e476-4a74-bd37-bd9a5e6fd723-kube-api-access-wb74r\") pod \"tigera-operator-6447996989-wvw52\" (UID: \"7a3e4bf7-e476-4a74-bd37-bd9a5e6fd723\") " pod="tigera-operator/tigera-operator-6447996989-wvw52" Mar 2 13:11:54.295233 containerd[1760]: time="2026-03-02T13:11:54.294889114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-wvw52,Uid:7a3e4bf7-e476-4a74-bd37-bd9a5e6fd723,Namespace:tigera-operator,Attempt:0,}" Mar 2 13:11:54.344736 containerd[1760]: time="2026-03-02T13:11:54.344549230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:11:54.344736 containerd[1760]: time="2026-03-02T13:11:54.344601630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:11:54.344736 containerd[1760]: time="2026-03-02T13:11:54.344621230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:54.345333 containerd[1760]: time="2026-03-02T13:11:54.345271550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:54.366309 systemd[1]: Started cri-containerd-6f1bdec16eae54528be5ee9a5024d4a20d7f41455103f82ca0913d446d9597d0.scope - libcontainer container 6f1bdec16eae54528be5ee9a5024d4a20d7f41455103f82ca0913d446d9597d0. Mar 2 13:11:54.383824 containerd[1760]: time="2026-03-02T13:11:54.383785858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9vr6,Uid:e44d5231-19b1-432e-a1c7-cb858f83c88a,Namespace:kube-system,Attempt:0,}" Mar 2 13:11:54.394828 containerd[1760]: time="2026-03-02T13:11:54.394793626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6447996989-wvw52,Uid:7a3e4bf7-e476-4a74-bd37-bd9a5e6fd723,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f1bdec16eae54528be5ee9a5024d4a20d7f41455103f82ca0913d446d9597d0\"" Mar 2 13:11:54.396653 containerd[1760]: time="2026-03-02T13:11:54.396566307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\"" Mar 2 13:11:54.426617 containerd[1760]: time="2026-03-02T13:11:54.426177929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:11:54.426767 containerd[1760]: time="2026-03-02T13:11:54.426638049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:11:54.426767 containerd[1760]: time="2026-03-02T13:11:54.426669249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:54.426835 containerd[1760]: time="2026-03-02T13:11:54.426765969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:11:54.443327 systemd[1]: Started cri-containerd-fa9ca1114479683a918532258c820759e8d11f1b6cc124911b8b999e5e78523b.scope - libcontainer container fa9ca1114479683a918532258c820759e8d11f1b6cc124911b8b999e5e78523b. Mar 2 13:11:54.465912 containerd[1760]: time="2026-03-02T13:11:54.465759517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9vr6,Uid:e44d5231-19b1-432e-a1c7-cb858f83c88a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa9ca1114479683a918532258c820759e8d11f1b6cc124911b8b999e5e78523b\"" Mar 2 13:11:54.475439 containerd[1760]: time="2026-03-02T13:11:54.475395844Z" level=info msg="CreateContainer within sandbox \"fa9ca1114479683a918532258c820759e8d11f1b6cc124911b8b999e5e78523b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:11:54.518684 containerd[1760]: time="2026-03-02T13:11:54.518542156Z" level=info msg="CreateContainer within sandbox \"fa9ca1114479683a918532258c820759e8d11f1b6cc124911b8b999e5e78523b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e5a608b8f9ee5e9bcc6588320f7faee33799fbc461bf79cd6fe1543ec91c8996\"" Mar 2 13:11:54.519201 containerd[1760]: time="2026-03-02T13:11:54.519005836Z" level=info msg="StartContainer for \"e5a608b8f9ee5e9bcc6588320f7faee33799fbc461bf79cd6fe1543ec91c8996\"" Mar 2 13:11:54.544311 systemd[1]: Started cri-containerd-e5a608b8f9ee5e9bcc6588320f7faee33799fbc461bf79cd6fe1543ec91c8996.scope - libcontainer container e5a608b8f9ee5e9bcc6588320f7faee33799fbc461bf79cd6fe1543ec91c8996. 
Mar 2 13:11:54.571749 containerd[1760]: time="2026-03-02T13:11:54.571175194Z" level=info msg="StartContainer for \"e5a608b8f9ee5e9bcc6588320f7faee33799fbc461bf79cd6fe1543ec91c8996\" returns successfully" Mar 2 13:11:55.488821 kubelet[3222]: I0302 13:11:55.488659 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-r9vr6" podStartSLOduration=2.488645536 podStartE2EDuration="2.488645536s" podCreationTimestamp="2026-03-02 13:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:11:55.487814856 +0000 UTC m=+9.161052643" watchObservedRunningTime="2026-03-02 13:11:55.488645536 +0000 UTC m=+9.161883323" Mar 2 13:11:56.691699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77058860.mount: Deactivated successfully. Mar 2 13:11:57.568093 containerd[1760]: time="2026-03-02T13:11:57.568047220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:11:57.572006 containerd[1760]: time="2026-03-02T13:11:57.571974823Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=25060789" Mar 2 13:11:57.576304 containerd[1760]: time="2026-03-02T13:11:57.576271625Z" level=info msg="ImageCreate event name:\"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:11:57.579703 containerd[1760]: time="2026-03-02T13:11:57.579672988Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:11:57.580345 containerd[1760]: time="2026-03-02T13:11:57.580226508Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id 
\"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"25056784\" in 3.183625281s" Mar 2 13:11:57.580345 containerd[1760]: time="2026-03-02T13:11:57.580264148Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\"" Mar 2 13:11:57.590078 containerd[1760]: time="2026-03-02T13:11:57.590053355Z" level=info msg="CreateContainer within sandbox \"6f1bdec16eae54528be5ee9a5024d4a20d7f41455103f82ca0913d446d9597d0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 2 13:11:57.624644 containerd[1760]: time="2026-03-02T13:11:57.624592097Z" level=info msg="CreateContainer within sandbox \"6f1bdec16eae54528be5ee9a5024d4a20d7f41455103f82ca0913d446d9597d0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"523c517782d729fc9b7cbe837f704ea40d273c2e001beee65b13986668d0cbc4\"" Mar 2 13:11:57.626392 containerd[1760]: time="2026-03-02T13:11:57.626324699Z" level=info msg="StartContainer for \"523c517782d729fc9b7cbe837f704ea40d273c2e001beee65b13986668d0cbc4\"" Mar 2 13:11:57.655311 systemd[1]: Started cri-containerd-523c517782d729fc9b7cbe837f704ea40d273c2e001beee65b13986668d0cbc4.scope - libcontainer container 523c517782d729fc9b7cbe837f704ea40d273c2e001beee65b13986668d0cbc4. 
Mar 2 13:11:57.682747 containerd[1760]: time="2026-03-02T13:11:57.682696376Z" level=info msg="StartContainer for \"523c517782d729fc9b7cbe837f704ea40d273c2e001beee65b13986668d0cbc4\" returns successfully" Mar 2 13:11:59.942496 kubelet[3222]: I0302 13:11:59.942439 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6447996989-wvw52" podStartSLOduration=3.757317986 podStartE2EDuration="6.942425388s" podCreationTimestamp="2026-03-02 13:11:53 +0000 UTC" firstStartedPulling="2026-03-02 13:11:54.396241227 +0000 UTC m=+8.069478974" lastFinishedPulling="2026-03-02 13:11:57.581348589 +0000 UTC m=+11.254586376" observedRunningTime="2026-03-02 13:11:58.494527432 +0000 UTC m=+12.167765219" watchObservedRunningTime="2026-03-02 13:11:59.942425388 +0000 UTC m=+13.615663135" Mar 2 13:12:03.427766 sudo[2249]: pam_unix(sudo:session): session closed for user root Mar 2 13:12:03.508565 sshd[2246]: pam_unix(sshd:session): session closed for user core Mar 2 13:12:03.513589 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:34610.service: Deactivated successfully. Mar 2 13:12:03.518533 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 13:12:03.518752 systemd[1]: session-9.scope: Consumed 3.542s CPU time, 149.2M memory peak, 0B memory swap peak. Mar 2 13:12:03.519711 systemd-logind[1725]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:12:03.522198 systemd-logind[1725]: Removed session 9. Mar 2 13:12:07.396575 systemd[1]: Created slice kubepods-besteffort-pod621107d8_d39c_4872_8620_a95c9ef0d371.slice - libcontainer container kubepods-besteffort-pod621107d8_d39c_4872_8620_a95c9ef0d371.slice. 
Mar 2 13:12:07.431041 kubelet[3222]: I0302 13:12:07.430993 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/621107d8-d39c-4872-8620-a95c9ef0d371-tigera-ca-bundle\") pod \"calico-typha-578fbcdb6-hl46m\" (UID: \"621107d8-d39c-4872-8620-a95c9ef0d371\") " pod="calico-system/calico-typha-578fbcdb6-hl46m" Mar 2 13:12:07.431041 kubelet[3222]: I0302 13:12:07.431041 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph254\" (UniqueName: \"kubernetes.io/projected/621107d8-d39c-4872-8620-a95c9ef0d371-kube-api-access-ph254\") pod \"calico-typha-578fbcdb6-hl46m\" (UID: \"621107d8-d39c-4872-8620-a95c9ef0d371\") " pod="calico-system/calico-typha-578fbcdb6-hl46m" Mar 2 13:12:07.431528 kubelet[3222]: I0302 13:12:07.431065 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/621107d8-d39c-4872-8620-a95c9ef0d371-typha-certs\") pod \"calico-typha-578fbcdb6-hl46m\" (UID: \"621107d8-d39c-4872-8620-a95c9ef0d371\") " pod="calico-system/calico-typha-578fbcdb6-hl46m" Mar 2 13:12:07.488410 systemd[1]: Created slice kubepods-besteffort-pod5967a8d2_1fdd_48d4_8c70_92fb66f32c7d.slice - libcontainer container kubepods-besteffort-pod5967a8d2_1fdd_48d4_8c70_92fb66f32c7d.slice. 
Mar 2 13:12:07.531478 kubelet[3222]: I0302 13:12:07.531436 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-lib-modules\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531478 kubelet[3222]: I0302 13:12:07.531471 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-sys-fs\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531478 kubelet[3222]: I0302 13:12:07.531489 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-nodeproc\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531748 kubelet[3222]: I0302 13:12:07.531502 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-xtables-lock\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531748 kubelet[3222]: I0302 13:12:07.531518 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-var-lib-calico\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531748 kubelet[3222]: I0302 13:12:07.531543 3222 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-node-certs\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531748 kubelet[3222]: I0302 13:12:07.531581 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-bpffs\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531748 kubelet[3222]: I0302 13:12:07.531595 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-cni-log-dir\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531855 kubelet[3222]: I0302 13:12:07.531618 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-cni-net-dir\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531855 kubelet[3222]: I0302 13:12:07.531630 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-tigera-ca-bundle\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531855 kubelet[3222]: I0302 13:12:07.531645 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-var-run-calico\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531855 kubelet[3222]: I0302 13:12:07.531661 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-cni-bin-dir\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531855 kubelet[3222]: I0302 13:12:07.531676 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-flexvol-driver-host\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531959 kubelet[3222]: I0302 13:12:07.531690 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-policysync\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.531959 kubelet[3222]: I0302 13:12:07.531705 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8qq\" (UniqueName: \"kubernetes.io/projected/5967a8d2-1fdd-48d4-8c70-92fb66f32c7d-kube-api-access-hb8qq\") pod \"calico-node-x74x2\" (UID: \"5967a8d2-1fdd-48d4-8c70-92fb66f32c7d\") " pod="calico-system/calico-node-x74x2" Mar 2 13:12:07.591576 kubelet[3222]: E0302 13:12:07.591520 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:07.631951 kubelet[3222]: I0302 13:12:07.631903 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/662b1d4a-a279-406c-96fb-fce38eb91097-registration-dir\") pod \"csi-node-driver-nbvnj\" (UID: \"662b1d4a-a279-406c-96fb-fce38eb91097\") " pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:07.632091 kubelet[3222]: I0302 13:12:07.631985 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6zfk\" (UniqueName: \"kubernetes.io/projected/662b1d4a-a279-406c-96fb-fce38eb91097-kube-api-access-s6zfk\") pod \"csi-node-driver-nbvnj\" (UID: \"662b1d4a-a279-406c-96fb-fce38eb91097\") " pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:07.632091 kubelet[3222]: I0302 13:12:07.632019 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/662b1d4a-a279-406c-96fb-fce38eb91097-socket-dir\") pod \"csi-node-driver-nbvnj\" (UID: \"662b1d4a-a279-406c-96fb-fce38eb91097\") " pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:07.632147 kubelet[3222]: I0302 13:12:07.632091 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/662b1d4a-a279-406c-96fb-fce38eb91097-kubelet-dir\") pod \"csi-node-driver-nbvnj\" (UID: \"662b1d4a-a279-406c-96fb-fce38eb91097\") " pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:07.632147 kubelet[3222]: I0302 13:12:07.632108 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/662b1d4a-a279-406c-96fb-fce38eb91097-varrun\") pod \"csi-node-driver-nbvnj\" 
(UID: \"662b1d4a-a279-406c-96fb-fce38eb91097\") " pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:07.634642 kubelet[3222]: E0302 13:12:07.634614 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.634642 kubelet[3222]: W0302 13:12:07.634638 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.634762 kubelet[3222]: E0302 13:12:07.634661 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.635256 kubelet[3222]: E0302 13:12:07.635241 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.635300 kubelet[3222]: W0302 13:12:07.635256 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.635300 kubelet[3222]: E0302 13:12:07.635268 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.709198 containerd[1760]: time="2026-03-02T13:12:07.709120848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578fbcdb6-hl46m,Uid:621107d8-d39c-4872-8620-a95c9ef0d371,Namespace:calico-system,Attempt:0,}"
Error: unexpected end of JSON input" Mar 2 13:12:07.738198 kubelet[3222]: E0302 13:12:07.738074 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.738198 kubelet[3222]: W0302 13:12:07.738113 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.738198 kubelet[3222]: E0302 13:12:07.738129 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.738608 kubelet[3222]: E0302 13:12:07.738594 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.738788 kubelet[3222]: W0302 13:12:07.738690 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.738788 kubelet[3222]: E0302 13:12:07.738727 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.739199 kubelet[3222]: E0302 13:12:07.739042 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.739199 kubelet[3222]: W0302 13:12:07.739053 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.739199 kubelet[3222]: E0302 13:12:07.739070 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.740228 kubelet[3222]: E0302 13:12:07.740017 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.741916 kubelet[3222]: W0302 13:12:07.740337 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.741916 kubelet[3222]: E0302 13:12:07.740365 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.742359 kubelet[3222]: E0302 13:12:07.742247 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.742359 kubelet[3222]: W0302 13:12:07.742261 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.742359 kubelet[3222]: E0302 13:12:07.742275 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.744263 kubelet[3222]: E0302 13:12:07.743931 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.744263 kubelet[3222]: W0302 13:12:07.743945 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.744263 kubelet[3222]: E0302 13:12:07.743957 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.745058 kubelet[3222]: E0302 13:12:07.744732 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.745058 kubelet[3222]: W0302 13:12:07.744745 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.745058 kubelet[3222]: E0302 13:12:07.744756 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.745626 kubelet[3222]: E0302 13:12:07.745504 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.745626 kubelet[3222]: W0302 13:12:07.745519 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.745626 kubelet[3222]: E0302 13:12:07.745530 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.746245 kubelet[3222]: E0302 13:12:07.746046 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.746245 kubelet[3222]: W0302 13:12:07.746059 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.746245 kubelet[3222]: E0302 13:12:07.746070 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.746706 kubelet[3222]: E0302 13:12:07.746484 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.746706 kubelet[3222]: W0302 13:12:07.746497 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.746706 kubelet[3222]: E0302 13:12:07.746508 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.747086 kubelet[3222]: E0302 13:12:07.746952 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.747086 kubelet[3222]: W0302 13:12:07.746968 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.747252 kubelet[3222]: E0302 13:12:07.747196 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.747862 kubelet[3222]: E0302 13:12:07.747688 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.747862 kubelet[3222]: W0302 13:12:07.747706 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.748270 kubelet[3222]: E0302 13:12:07.747720 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:07.758987 kubelet[3222]: E0302 13:12:07.758924 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:07.758987 kubelet[3222]: W0302 13:12:07.758941 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:07.758987 kubelet[3222]: E0302 13:12:07.758955 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:07.759658 containerd[1760]: time="2026-03-02T13:12:07.759463362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:07.759658 containerd[1760]: time="2026-03-02T13:12:07.759517722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:07.759658 containerd[1760]: time="2026-03-02T13:12:07.759552122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:07.759658 containerd[1760]: time="2026-03-02T13:12:07.759623282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:07.779311 systemd[1]: Started cri-containerd-31aaff6e2d9abdb9b34428ad260e2de8445d7e36add584183977abd7c2b05ab4.scope - libcontainer container 31aaff6e2d9abdb9b34428ad260e2de8445d7e36add584183977abd7c2b05ab4. 
Mar 2 13:12:07.797711 containerd[1760]: time="2026-03-02T13:12:07.797324748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x74x2,Uid:5967a8d2-1fdd-48d4-8c70-92fb66f32c7d,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:07.807619 containerd[1760]: time="2026-03-02T13:12:07.807586795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578fbcdb6-hl46m,Uid:621107d8-d39c-4872-8620-a95c9ef0d371,Namespace:calico-system,Attempt:0,} returns sandbox id \"31aaff6e2d9abdb9b34428ad260e2de8445d7e36add584183977abd7c2b05ab4\"" Mar 2 13:12:07.809121 containerd[1760]: time="2026-03-02T13:12:07.809035436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\"" Mar 2 13:12:07.851761 containerd[1760]: time="2026-03-02T13:12:07.846695901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:07.851761 containerd[1760]: time="2026-03-02T13:12:07.846747181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:07.851761 containerd[1760]: time="2026-03-02T13:12:07.846761581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:07.851761 containerd[1760]: time="2026-03-02T13:12:07.846834981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:07.864317 systemd[1]: Started cri-containerd-cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6.scope - libcontainer container cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6. 
Mar 2 13:12:07.884939 containerd[1760]: time="2026-03-02T13:12:07.884811167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x74x2,Uid:5967a8d2-1fdd-48d4-8c70-92fb66f32c7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\"" Mar 2 13:12:09.311050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231135455.mount: Deactivated successfully. Mar 2 13:12:09.421701 kubelet[3222]: E0302 13:12:09.421657 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:10.396368 containerd[1760]: time="2026-03-02T13:12:10.396314980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:10.399217 containerd[1760]: time="2026-03-02T13:12:10.399097142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=33841852" Mar 2 13:12:10.404182 containerd[1760]: time="2026-03-02T13:12:10.404002105Z" level=info msg="ImageCreate event name:\"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:10.408542 containerd[1760]: time="2026-03-02T13:12:10.408497548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:10.409377 containerd[1760]: time="2026-03-02T13:12:10.409347789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id 
\"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"33841706\" in 2.600284593s" Mar 2 13:12:10.409439 containerd[1760]: time="2026-03-02T13:12:10.409380109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\"" Mar 2 13:12:10.411989 containerd[1760]: time="2026-03-02T13:12:10.410764550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\"" Mar 2 13:12:10.427418 containerd[1760]: time="2026-03-02T13:12:10.427381361Z" level=info msg="CreateContainer within sandbox \"31aaff6e2d9abdb9b34428ad260e2de8445d7e36add584183977abd7c2b05ab4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 2 13:12:10.464434 containerd[1760]: time="2026-03-02T13:12:10.464285426Z" level=info msg="CreateContainer within sandbox \"31aaff6e2d9abdb9b34428ad260e2de8445d7e36add584183977abd7c2b05ab4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d0e62f7b765d377385043121f2e84010101e3e9a077d23d5907126a0e8db740\"" Mar 2 13:12:10.465277 containerd[1760]: time="2026-03-02T13:12:10.464850026Z" level=info msg="StartContainer for \"1d0e62f7b765d377385043121f2e84010101e3e9a077d23d5907126a0e8db740\"" Mar 2 13:12:10.501294 systemd[1]: Started cri-containerd-1d0e62f7b765d377385043121f2e84010101e3e9a077d23d5907126a0e8db740.scope - libcontainer container 1d0e62f7b765d377385043121f2e84010101e3e9a077d23d5907126a0e8db740. 
Mar 2 13:12:10.535499 containerd[1760]: time="2026-03-02T13:12:10.535445434Z" level=info msg="StartContainer for \"1d0e62f7b765d377385043121f2e84010101e3e9a077d23d5907126a0e8db740\" returns successfully" Mar 2 13:12:11.422257 kubelet[3222]: E0302 13:12:11.422218 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:11.528107 kubelet[3222]: I0302 13:12:11.527576 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-578fbcdb6-hl46m" podStartSLOduration=1.92595627 podStartE2EDuration="4.527565223s" podCreationTimestamp="2026-03-02 13:12:07 +0000 UTC" firstStartedPulling="2026-03-02 13:12:07.808740476 +0000 UTC m=+21.481978223" lastFinishedPulling="2026-03-02 13:12:10.410349389 +0000 UTC m=+24.083587176" observedRunningTime="2026-03-02 13:12:11.527408542 +0000 UTC m=+25.200646329" watchObservedRunningTime="2026-03-02 13:12:11.527565223 +0000 UTC m=+25.200803010" Mar 2 13:12:11.551954 kubelet[3222]: E0302 13:12:11.551823 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.551954 kubelet[3222]: W0302 13:12:11.551841 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.551954 kubelet[3222]: E0302 13:12:11.551881 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.552235 kubelet[3222]: E0302 13:12:11.552030 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.552235 kubelet[3222]: W0302 13:12:11.552038 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.552235 kubelet[3222]: E0302 13:12:11.552047 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.552449 kubelet[3222]: E0302 13:12:11.552350 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.552449 kubelet[3222]: W0302 13:12:11.552361 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.552449 kubelet[3222]: E0302 13:12:11.552370 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.552645 kubelet[3222]: E0302 13:12:11.552584 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.552645 kubelet[3222]: W0302 13:12:11.552594 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.552645 kubelet[3222]: E0302 13:12:11.552603 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.553000 kubelet[3222]: E0302 13:12:11.552908 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.553000 kubelet[3222]: W0302 13:12:11.552919 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.553000 kubelet[3222]: E0302 13:12:11.552931 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.553206 kubelet[3222]: E0302 13:12:11.553145 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.553206 kubelet[3222]: W0302 13:12:11.553154 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.553206 kubelet[3222]: E0302 13:12:11.553172 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.553497 kubelet[3222]: E0302 13:12:11.553439 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.553497 kubelet[3222]: W0302 13:12:11.553449 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.553497 kubelet[3222]: E0302 13:12:11.553458 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.553796 kubelet[3222]: E0302 13:12:11.553734 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.553796 kubelet[3222]: W0302 13:12:11.553745 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.553796 kubelet[3222]: E0302 13:12:11.553754 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.554099 kubelet[3222]: E0302 13:12:11.554046 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.554099 kubelet[3222]: W0302 13:12:11.554057 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.554099 kubelet[3222]: E0302 13:12:11.554067 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.554439 kubelet[3222]: E0302 13:12:11.554343 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.554439 kubelet[3222]: W0302 13:12:11.554354 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.554439 kubelet[3222]: E0302 13:12:11.554363 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.554683 kubelet[3222]: E0302 13:12:11.554590 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.554683 kubelet[3222]: W0302 13:12:11.554600 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.554683 kubelet[3222]: E0302 13:12:11.554609 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.554828 kubelet[3222]: E0302 13:12:11.554819 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.554920 kubelet[3222]: W0302 13:12:11.554867 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.554920 kubelet[3222]: E0302 13:12:11.554879 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.555129 kubelet[3222]: E0302 13:12:11.555118 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.555261 kubelet[3222]: W0302 13:12:11.555210 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.555261 kubelet[3222]: E0302 13:12:11.555225 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.555547 kubelet[3222]: E0302 13:12:11.555459 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.555547 kubelet[3222]: W0302 13:12:11.555470 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.555547 kubelet[3222]: E0302 13:12:11.555479 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.555757 kubelet[3222]: E0302 13:12:11.555694 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.555757 kubelet[3222]: W0302 13:12:11.555703 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.555757 kubelet[3222]: E0302 13:12:11.555712 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.559052 kubelet[3222]: E0302 13:12:11.559039 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.559192 kubelet[3222]: W0302 13:12:11.559125 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.559192 kubelet[3222]: E0302 13:12:11.559140 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.559556 kubelet[3222]: E0302 13:12:11.559477 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.559556 kubelet[3222]: W0302 13:12:11.559488 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.559556 kubelet[3222]: E0302 13:12:11.559498 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.559816 kubelet[3222]: E0302 13:12:11.559787 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.559816 kubelet[3222]: W0302 13:12:11.559797 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.559816 kubelet[3222]: E0302 13:12:11.559806 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.560183 kubelet[3222]: E0302 13:12:11.560112 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.560183 kubelet[3222]: W0302 13:12:11.560123 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.560183 kubelet[3222]: E0302 13:12:11.560132 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.560482 kubelet[3222]: E0302 13:12:11.560423 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.560482 kubelet[3222]: W0302 13:12:11.560434 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.560482 kubelet[3222]: E0302 13:12:11.560443 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.560764 kubelet[3222]: E0302 13:12:11.560698 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.560764 kubelet[3222]: W0302 13:12:11.560708 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.560764 kubelet[3222]: E0302 13:12:11.560717 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.561063 kubelet[3222]: E0302 13:12:11.560998 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.561063 kubelet[3222]: W0302 13:12:11.561008 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.561063 kubelet[3222]: E0302 13:12:11.561017 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.561431 kubelet[3222]: E0302 13:12:11.561323 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.561431 kubelet[3222]: W0302 13:12:11.561334 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.561431 kubelet[3222]: E0302 13:12:11.561344 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.561686 kubelet[3222]: E0302 13:12:11.561593 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.561686 kubelet[3222]: W0302 13:12:11.561603 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.561686 kubelet[3222]: E0302 13:12:11.561614 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.562075 kubelet[3222]: E0302 13:12:11.562044 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.562075 kubelet[3222]: W0302 13:12:11.562055 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.562075 kubelet[3222]: E0302 13:12:11.562064 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.562442 kubelet[3222]: E0302 13:12:11.562361 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.562442 kubelet[3222]: W0302 13:12:11.562372 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.562442 kubelet[3222]: E0302 13:12:11.562382 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.562794 kubelet[3222]: E0302 13:12:11.562702 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.562794 kubelet[3222]: W0302 13:12:11.562713 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.562794 kubelet[3222]: E0302 13:12:11.562723 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.563200 kubelet[3222]: E0302 13:12:11.563105 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.563200 kubelet[3222]: W0302 13:12:11.563116 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.563200 kubelet[3222]: E0302 13:12:11.563127 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.563446 kubelet[3222]: E0302 13:12:11.563376 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.563446 kubelet[3222]: W0302 13:12:11.563385 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.563446 kubelet[3222]: E0302 13:12:11.563394 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.563709 kubelet[3222]: E0302 13:12:11.563675 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.563709 kubelet[3222]: W0302 13:12:11.563686 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.563709 kubelet[3222]: E0302 13:12:11.563695 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.564078 kubelet[3222]: E0302 13:12:11.564002 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.564078 kubelet[3222]: W0302 13:12:11.564013 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.564078 kubelet[3222]: E0302 13:12:11.564022 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.564639 kubelet[3222]: E0302 13:12:11.564333 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.564639 kubelet[3222]: W0302 13:12:11.564344 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.564639 kubelet[3222]: E0302 13:12:11.564354 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:12:11.564930 kubelet[3222]: E0302 13:12:11.564918 3222 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:12:11.564998 kubelet[3222]: W0302 13:12:11.564987 3222 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:12:11.565049 kubelet[3222]: E0302 13:12:11.565040 3222 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:12:11.888753 containerd[1760]: time="2026-03-02T13:12:11.888706946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:11.891268 containerd[1760]: time="2026-03-02T13:12:11.891220868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4456989" Mar 2 13:12:11.894998 containerd[1760]: time="2026-03-02T13:12:11.894429790Z" level=info msg="ImageCreate event name:\"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:11.898690 containerd[1760]: time="2026-03-02T13:12:11.898662433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:11.899314 containerd[1760]: time="2026-03-02T13:12:11.899278913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"5854474\" in 1.488478643s" Mar 2 13:12:11.899377 containerd[1760]: time="2026-03-02T13:12:11.899317153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\"" Mar 2 13:12:11.908391 containerd[1760]: time="2026-03-02T13:12:11.908360359Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 13:12:11.946726 containerd[1760]: time="2026-03-02T13:12:11.946690345Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265\"" Mar 2 13:12:11.947771 containerd[1760]: time="2026-03-02T13:12:11.947744826Z" level=info msg="StartContainer for \"a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265\"" Mar 2 13:12:11.989390 systemd[1]: Started cri-containerd-a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265.scope - libcontainer container a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265. Mar 2 13:12:12.016349 containerd[1760]: time="2026-03-02T13:12:12.016293870Z" level=info msg="StartContainer for \"a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265\" returns successfully" Mar 2 13:12:12.020858 systemd[1]: cri-containerd-a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265.scope: Deactivated successfully. Mar 2 13:12:12.040707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265-rootfs.mount: Deactivated successfully. 
Mar 2 13:12:12.515411 kubelet[3222]: I0302 13:12:12.515378 3222 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:12:13.197942 containerd[1760]: time="2026-03-02T13:12:13.197820199Z" level=info msg="shim disconnected" id=a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265 namespace=k8s.io Mar 2 13:12:13.197942 containerd[1760]: time="2026-03-02T13:12:13.197889079Z" level=warning msg="cleaning up after shim disconnected" id=a75d3c796b59bb18ea462c2a73a6da91a9772fef2bd296bb7d757228d56c2265 namespace=k8s.io Mar 2 13:12:13.197942 containerd[1760]: time="2026-03-02T13:12:13.197898239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:12:13.421947 kubelet[3222]: E0302 13:12:13.421877 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:13.520121 containerd[1760]: time="2026-03-02T13:12:13.520085107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 13:12:15.421695 kubelet[3222]: E0302 13:12:15.421363 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:17.421562 kubelet[3222]: E0302 13:12:17.421278 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:19.421898 
kubelet[3222]: E0302 13:12:19.421847 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:20.768813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948713785.mount: Deactivated successfully. Mar 2 13:12:21.165960 containerd[1760]: time="2026-03-02T13:12:21.165725305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:21.168542 containerd[1760]: time="2026-03-02T13:12:21.168406907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=153583198" Mar 2 13:12:21.172180 containerd[1760]: time="2026-03-02T13:12:21.171937510Z" level=info msg="ImageCreate event name:\"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:21.176829 containerd[1760]: time="2026-03-02T13:12:21.176529113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:21.177672 containerd[1760]: time="2026-03-02T13:12:21.177259193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"153583060\" in 7.657134886s" Mar 2 13:12:21.177672 containerd[1760]: time="2026-03-02T13:12:21.177292593Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\"" Mar 2 13:12:21.187489 containerd[1760]: time="2026-03-02T13:12:21.186970160Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 13:12:21.224760 containerd[1760]: time="2026-03-02T13:12:21.224720547Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889\"" Mar 2 13:12:21.227184 containerd[1760]: time="2026-03-02T13:12:21.226597708Z" level=info msg="StartContainer for \"da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889\"" Mar 2 13:12:21.257393 systemd[1]: Started cri-containerd-da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889.scope - libcontainer container da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889. Mar 2 13:12:21.289925 containerd[1760]: time="2026-03-02T13:12:21.289857953Z" level=info msg="StartContainer for \"da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889\" returns successfully" Mar 2 13:12:21.324031 systemd[1]: cri-containerd-da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889.scope: Deactivated successfully. 
Mar 2 13:12:21.493870 kubelet[3222]: E0302 13:12:21.421439 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:21.769206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889-rootfs.mount: Deactivated successfully. Mar 2 13:12:22.605985 containerd[1760]: time="2026-03-02T13:12:22.605905560Z" level=info msg="shim disconnected" id=da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889 namespace=k8s.io Mar 2 13:12:22.605985 containerd[1760]: time="2026-03-02T13:12:22.605979440Z" level=warning msg="cleaning up after shim disconnected" id=da0f835419dd69aa8d1a4bca1f53be38c1d77e53ea8c720e57986658cbd0e889 namespace=k8s.io Mar 2 13:12:22.605985 containerd[1760]: time="2026-03-02T13:12:22.605989800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:12:23.422021 kubelet[3222]: E0302 13:12:23.421664 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:23.543365 containerd[1760]: time="2026-03-02T13:12:23.542926381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 13:12:24.929202 kubelet[3222]: I0302 13:12:24.928804 3222 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:12:25.421565 kubelet[3222]: E0302 13:12:25.421478 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:26.735100 containerd[1760]: time="2026-03-02T13:12:26.734328270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:26.737688 containerd[1760]: time="2026-03-02T13:12:26.737664312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=65998037" Mar 2 13:12:26.740615 containerd[1760]: time="2026-03-02T13:12:26.740593034Z" level=info msg="ImageCreate event name:\"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:26.745661 containerd[1760]: time="2026-03-02T13:12:26.745637118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:26.746852 containerd[1760]: time="2026-03-02T13:12:26.746500599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"67395562\" in 3.203537538s" Mar 2 13:12:26.746946 containerd[1760]: time="2026-03-02T13:12:26.746931599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\"" Mar 2 13:12:26.756714 containerd[1760]: time="2026-03-02T13:12:26.756592206Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 13:12:26.794415 containerd[1760]: time="2026-03-02T13:12:26.794369032Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa\"" Mar 2 13:12:26.794828 containerd[1760]: time="2026-03-02T13:12:26.794789753Z" level=info msg="StartContainer for \"578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa\"" Mar 2 13:12:26.822323 systemd[1]: Started cri-containerd-578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa.scope - libcontainer container 578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa. Mar 2 13:12:26.851592 containerd[1760]: time="2026-03-02T13:12:26.851556233Z" level=info msg="StartContainer for \"578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa\" returns successfully" Mar 2 13:12:27.421411 kubelet[3222]: E0302 13:12:27.421357 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:28.192948 systemd[1]: cri-containerd-578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa.scope: Deactivated successfully. Mar 2 13:12:28.213581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa-rootfs.mount: Deactivated successfully. 
Mar 2 13:12:28.280613 kubelet[3222]: I0302 13:12:28.280590 3222 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 2 13:12:29.059397 systemd[1]: Created slice kubepods-burstable-pod5c8fecf4_235a_438f_90ac_097988d29780.slice - libcontainer container kubepods-burstable-pod5c8fecf4_235a_438f_90ac_097988d29780.slice. Mar 2 13:12:29.069419 kubelet[3222]: I0302 13:12:29.068373 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c8fecf4-235a-438f-90ac-097988d29780-config-volume\") pod \"coredns-7d764666f9-v96dg\" (UID: \"5c8fecf4-235a-438f-90ac-097988d29780\") " pod="kube-system/coredns-7d764666f9-v96dg" Mar 2 13:12:29.069419 kubelet[3222]: I0302 13:12:29.068443 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml4nr\" (UniqueName: \"kubernetes.io/projected/5c8fecf4-235a-438f-90ac-097988d29780-kube-api-access-ml4nr\") pod \"coredns-7d764666f9-v96dg\" (UID: \"5c8fecf4-235a-438f-90ac-097988d29780\") " pod="kube-system/coredns-7d764666f9-v96dg" Mar 2 13:12:29.071821 containerd[1760]: time="2026-03-02T13:12:29.071480430Z" level=info msg="shim disconnected" id=578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa namespace=k8s.io Mar 2 13:12:29.071821 containerd[1760]: time="2026-03-02T13:12:29.071766190Z" level=warning msg="cleaning up after shim disconnected" id=578caab1a5ee92fd2bb1cd4d38e04d2abeb0312b972aa0074ede8517fd34c1fa namespace=k8s.io Mar 2 13:12:29.071821 containerd[1760]: time="2026-03-02T13:12:29.071779350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:12:29.074432 systemd[1]: Created slice kubepods-besteffort-pod662b1d4a_a279_406c_96fb_fce38eb91097.slice - libcontainer container kubepods-besteffort-pod662b1d4a_a279_406c_96fb_fce38eb91097.slice. 
Mar 2 13:12:29.091286 systemd[1]: Created slice kubepods-burstable-pod421d6308_a09a_4e47_a594_09b64ce05a98.slice - libcontainer container kubepods-burstable-pod421d6308_a09a_4e47_a594_09b64ce05a98.slice. Mar 2 13:12:29.100191 containerd[1760]: time="2026-03-02T13:12:29.100143330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbvnj,Uid:662b1d4a-a279-406c-96fb-fce38eb91097,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:29.110597 systemd[1]: Created slice kubepods-besteffort-poda8eaf721_f0bb_465f_a36a_f220c434ea52.slice - libcontainer container kubepods-besteffort-poda8eaf721_f0bb_465f_a36a_f220c434ea52.slice. Mar 2 13:12:29.117520 systemd[1]: Created slice kubepods-besteffort-poddd3978ae_983f_4585_acd3_45fc4f7a81fe.slice - libcontainer container kubepods-besteffort-poddd3978ae_983f_4585_acd3_45fc4f7a81fe.slice. Mar 2 13:12:29.123171 systemd[1]: Created slice kubepods-besteffort-podfe44d783_a407_4475_bb45_4b5e475fc600.slice - libcontainer container kubepods-besteffort-podfe44d783_a407_4475_bb45_4b5e475fc600.slice. Mar 2 13:12:29.129630 systemd[1]: Created slice kubepods-besteffort-podda16cb8c_bc4e_45ca_bc1a_d50354581491.slice - libcontainer container kubepods-besteffort-podda16cb8c_bc4e_45ca_bc1a_d50354581491.slice. Mar 2 13:12:29.144588 systemd[1]: Created slice kubepods-besteffort-pod2b8c6e3f_27bc_4357_bc8f_90f4d1ae9548.slice - libcontainer container kubepods-besteffort-pod2b8c6e3f_27bc_4357_bc8f_90f4d1ae9548.slice. 
Mar 2 13:12:29.169336 kubelet[3222]: I0302 13:12:29.169233 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-backend-key-pair\") pod \"whisker-554bdbbf9-xb6cd\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " pod="calico-system/whisker-554bdbbf9-xb6cd" Mar 2 13:12:29.169336 kubelet[3222]: I0302 13:12:29.169278 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgkb\" (UniqueName: \"kubernetes.io/projected/dd3978ae-983f-4585-acd3-45fc4f7a81fe-kube-api-access-pcgkb\") pod \"whisker-554bdbbf9-xb6cd\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " pod="calico-system/whisker-554bdbbf9-xb6cd" Mar 2 13:12:29.169336 kubelet[3222]: I0302 13:12:29.169296 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548-calico-apiserver-certs\") pod \"calico-apiserver-7f6ff4dd5d-nbv94\" (UID: \"2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548\") " pod="calico-system/calico-apiserver-7f6ff4dd5d-nbv94" Mar 2 13:12:29.169514 kubelet[3222]: I0302 13:12:29.169370 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fe44d783-a407-4475-bb45-4b5e475fc600-goldmane-key-pair\") pod \"goldmane-7d7658d587-mwp5j\" (UID: \"fe44d783-a407-4475-bb45-4b5e475fc600\") " pod="calico-system/goldmane-7d7658d587-mwp5j" Mar 2 13:12:29.169514 kubelet[3222]: I0302 13:12:29.169389 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvnr\" (UniqueName: \"kubernetes.io/projected/fe44d783-a407-4475-bb45-4b5e475fc600-kube-api-access-zxvnr\") pod \"goldmane-7d7658d587-mwp5j\" (UID: 
\"fe44d783-a407-4475-bb45-4b5e475fc600\") " pod="calico-system/goldmane-7d7658d587-mwp5j" Mar 2 13:12:29.169869 kubelet[3222]: I0302 13:12:29.169641 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhjqd\" (UniqueName: \"kubernetes.io/projected/2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548-kube-api-access-hhjqd\") pod \"calico-apiserver-7f6ff4dd5d-nbv94\" (UID: \"2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548\") " pod="calico-system/calico-apiserver-7f6ff4dd5d-nbv94" Mar 2 13:12:29.170974 kubelet[3222]: I0302 13:12:29.170860 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8eaf721-f0bb-465f-a36a-f220c434ea52-tigera-ca-bundle\") pod \"calico-kube-controllers-775879d688-ks9z6\" (UID: \"a8eaf721-f0bb-465f-a36a-f220c434ea52\") " pod="calico-system/calico-kube-controllers-775879d688-ks9z6" Mar 2 13:12:29.170974 kubelet[3222]: I0302 13:12:29.170892 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjjq\" (UniqueName: \"kubernetes.io/projected/da16cb8c-bc4e-45ca-bc1a-d50354581491-kube-api-access-kbjjq\") pod \"calico-apiserver-7f6ff4dd5d-77p2h\" (UID: \"da16cb8c-bc4e-45ca-bc1a-d50354581491\") " pod="calico-system/calico-apiserver-7f6ff4dd5d-77p2h" Mar 2 13:12:29.170974 kubelet[3222]: I0302 13:12:29.170914 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flmrl\" (UniqueName: \"kubernetes.io/projected/421d6308-a09a-4e47-a594-09b64ce05a98-kube-api-access-flmrl\") pod \"coredns-7d764666f9-fl49c\" (UID: \"421d6308-a09a-4e47-a594-09b64ce05a98\") " pod="kube-system/coredns-7d764666f9-fl49c" Mar 2 13:12:29.170974 kubelet[3222]: I0302 13:12:29.170938 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbflf\" (UniqueName: 
\"kubernetes.io/projected/a8eaf721-f0bb-465f-a36a-f220c434ea52-kube-api-access-jbflf\") pod \"calico-kube-controllers-775879d688-ks9z6\" (UID: \"a8eaf721-f0bb-465f-a36a-f220c434ea52\") " pod="calico-system/calico-kube-controllers-775879d688-ks9z6" Mar 2 13:12:29.171121 kubelet[3222]: I0302 13:12:29.170987 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-nginx-config\") pod \"whisker-554bdbbf9-xb6cd\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " pod="calico-system/whisker-554bdbbf9-xb6cd" Mar 2 13:12:29.172212 kubelet[3222]: I0302 13:12:29.172156 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da16cb8c-bc4e-45ca-bc1a-d50354581491-calico-apiserver-certs\") pod \"calico-apiserver-7f6ff4dd5d-77p2h\" (UID: \"da16cb8c-bc4e-45ca-bc1a-d50354581491\") " pod="calico-system/calico-apiserver-7f6ff4dd5d-77p2h" Mar 2 13:12:29.172667 kubelet[3222]: I0302 13:12:29.172428 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe44d783-a407-4475-bb45-4b5e475fc600-goldmane-ca-bundle\") pod \"goldmane-7d7658d587-mwp5j\" (UID: \"fe44d783-a407-4475-bb45-4b5e475fc600\") " pod="calico-system/goldmane-7d7658d587-mwp5j" Mar 2 13:12:29.172980 kubelet[3222]: I0302 13:12:29.172953 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/421d6308-a09a-4e47-a594-09b64ce05a98-config-volume\") pod \"coredns-7d764666f9-fl49c\" (UID: \"421d6308-a09a-4e47-a594-09b64ce05a98\") " pod="kube-system/coredns-7d764666f9-fl49c" Mar 2 13:12:29.173274 kubelet[3222]: I0302 13:12:29.173239 3222 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-ca-bundle\") pod \"whisker-554bdbbf9-xb6cd\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " pod="calico-system/whisker-554bdbbf9-xb6cd" Mar 2 13:12:29.173274 kubelet[3222]: I0302 13:12:29.173293 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe44d783-a407-4475-bb45-4b5e475fc600-config\") pod \"goldmane-7d7658d587-mwp5j\" (UID: \"fe44d783-a407-4475-bb45-4b5e475fc600\") " pod="calico-system/goldmane-7d7658d587-mwp5j" Mar 2 13:12:29.208686 containerd[1760]: time="2026-03-02T13:12:29.208630365Z" level=error msg="Failed to destroy network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.209440 containerd[1760]: time="2026-03-02T13:12:29.209405326Z" level=error msg="encountered an error cleaning up failed sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.209497 containerd[1760]: time="2026-03-02T13:12:29.209461966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbvnj,Uid:662b1d4a-a279-406c-96fb-fce38eb91097,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.211135 kubelet[3222]: E0302 13:12:29.209671 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.210831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730-shm.mount: Deactivated successfully. Mar 2 13:12:29.212219 kubelet[3222]: E0302 13:12:29.211528 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:29.212219 kubelet[3222]: E0302 13:12:29.211565 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nbvnj" Mar 2 13:12:29.212219 kubelet[3222]: E0302 13:12:29.211626 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nbvnj_calico-system(662b1d4a-a279-406c-96fb-fce38eb91097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-nbvnj_calico-system(662b1d4a-a279-406c-96fb-fce38eb91097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:29.371216 containerd[1760]: time="2026-03-02T13:12:29.370915398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v96dg,Uid:5c8fecf4-235a-438f-90ac-097988d29780,Namespace:kube-system,Attempt:0,}" Mar 2 13:12:29.421487 containerd[1760]: time="2026-03-02T13:12:29.421420394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-775879d688-ks9z6,Uid:a8eaf721-f0bb-465f-a36a-f220c434ea52,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:29.422352 containerd[1760]: time="2026-03-02T13:12:29.422186114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fl49c,Uid:421d6308-a09a-4e47-a594-09b64ce05a98,Namespace:kube-system,Attempt:0,}" Mar 2 13:12:29.427369 containerd[1760]: time="2026-03-02T13:12:29.427344078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554bdbbf9-xb6cd,Uid:dd3978ae-983f-4585-acd3-45fc4f7a81fe,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:29.433476 containerd[1760]: time="2026-03-02T13:12:29.433356802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-mwp5j,Uid:fe44d783-a407-4475-bb45-4b5e475fc600,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:29.439131 containerd[1760]: time="2026-03-02T13:12:29.439096606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-77p2h,Uid:da16cb8c-bc4e-45ca-bc1a-d50354581491,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:29.452860 containerd[1760]: 
time="2026-03-02T13:12:29.452810256Z" level=error msg="Failed to destroy network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.456404 containerd[1760]: time="2026-03-02T13:12:29.456368778Z" level=error msg="encountered an error cleaning up failed sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.456476 containerd[1760]: time="2026-03-02T13:12:29.456426058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v96dg,Uid:5c8fecf4-235a-438f-90ac-097988d29780,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.456631 kubelet[3222]: E0302 13:12:29.456599 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.456681 kubelet[3222]: E0302 13:12:29.456649 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-v96dg" Mar 2 13:12:29.456681 kubelet[3222]: E0302 13:12:29.456667 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-v96dg" Mar 2 13:12:29.456731 kubelet[3222]: E0302 13:12:29.456711 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-v96dg_kube-system(5c8fecf4-235a-438f-90ac-097988d29780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-v96dg_kube-system(5c8fecf4-235a-438f-90ac-097988d29780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-v96dg" podUID="5c8fecf4-235a-438f-90ac-097988d29780" Mar 2 13:12:29.458919 containerd[1760]: time="2026-03-02T13:12:29.458862660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-nbv94,Uid:2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:29.553655 kubelet[3222]: I0302 13:12:29.553525 3222 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:29.554424 containerd[1760]: time="2026-03-02T13:12:29.554106486Z" level=info msg="StopPodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\"" Mar 2 13:12:29.556110 containerd[1760]: time="2026-03-02T13:12:29.555971328Z" level=info msg="Ensure that sandbox a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253 in task-service has been cleanup successfully" Mar 2 13:12:29.560804 kubelet[3222]: I0302 13:12:29.560103 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:29.563899 containerd[1760]: time="2026-03-02T13:12:29.563866253Z" level=info msg="StopPodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\"" Mar 2 13:12:29.564059 containerd[1760]: time="2026-03-02T13:12:29.564037773Z" level=info msg="Ensure that sandbox 3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730 in task-service has been cleanup successfully" Mar 2 13:12:29.577473 containerd[1760]: time="2026-03-02T13:12:29.577335982Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 13:12:29.623599 containerd[1760]: time="2026-03-02T13:12:29.623494055Z" level=error msg="StopPodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" failed" error="failed to destroy network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.624427 kubelet[3222]: E0302 13:12:29.623975 3222 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:29.624427 kubelet[3222]: E0302 13:12:29.624295 3222 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730"} Mar 2 13:12:29.624427 kubelet[3222]: E0302 13:12:29.624352 3222 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"662b1d4a-a279-406c-96fb-fce38eb91097\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 13:12:29.624427 kubelet[3222]: E0302 13:12:29.624378 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"662b1d4a-a279-406c-96fb-fce38eb91097\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nbvnj" podUID="662b1d4a-a279-406c-96fb-fce38eb91097" Mar 2 13:12:29.657228 containerd[1760]: time="2026-03-02T13:12:29.656969078Z" level=error msg="StopPodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" 
failed" error="failed to destroy network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.657353 kubelet[3222]: E0302 13:12:29.657198 3222 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:29.657353 kubelet[3222]: E0302 13:12:29.657245 3222 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253"} Mar 2 13:12:29.657353 kubelet[3222]: E0302 13:12:29.657285 3222 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c8fecf4-235a-438f-90ac-097988d29780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 13:12:29.657353 kubelet[3222]: E0302 13:12:29.657311 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c8fecf4-235a-438f-90ac-097988d29780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-v96dg" podUID="5c8fecf4-235a-438f-90ac-097988d29780" Mar 2 13:12:29.673889 containerd[1760]: time="2026-03-02T13:12:29.673778210Z" level=info msg="CreateContainer within sandbox \"cc49d93c1458110a247495459162fbf00e8c4b9608acb051abfb662011a25ed6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"98274eb76963d1eb75475eb3315a751e5eed3412c5c391b97f432307f10c4bf7\"" Mar 2 13:12:29.675258 containerd[1760]: time="2026-03-02T13:12:29.674559970Z" level=info msg="StartContainer for \"98274eb76963d1eb75475eb3315a751e5eed3412c5c391b97f432307f10c4bf7\"" Mar 2 13:12:29.713356 systemd[1]: Started cri-containerd-98274eb76963d1eb75475eb3315a751e5eed3412c5c391b97f432307f10c4bf7.scope - libcontainer container 98274eb76963d1eb75475eb3315a751e5eed3412c5c391b97f432307f10c4bf7. 
Mar 2 13:12:29.750735 containerd[1760]: time="2026-03-02T13:12:29.750196063Z" level=error msg="Failed to destroy network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.753119 containerd[1760]: time="2026-03-02T13:12:29.753074705Z" level=error msg="encountered an error cleaning up failed sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.753837 containerd[1760]: time="2026-03-02T13:12:29.753719545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-77p2h,Uid:da16cb8c-bc4e-45ca-bc1a-d50354581491,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.756255 kubelet[3222]: E0302 13:12:29.756191 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.756411 kubelet[3222]: E0302 13:12:29.756295 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6ff4dd5d-77p2h" Mar 2 13:12:29.756411 kubelet[3222]: E0302 13:12:29.756317 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6ff4dd5d-77p2h" Mar 2 13:12:29.756411 kubelet[3222]: E0302 13:12:29.756375 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f6ff4dd5d-77p2h_calico-system(da16cb8c-bc4e-45ca-bc1a-d50354581491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f6ff4dd5d-77p2h_calico-system(da16cb8c-bc4e-45ca-bc1a-d50354581491)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f6ff4dd5d-77p2h" podUID="da16cb8c-bc4e-45ca-bc1a-d50354581491" Mar 2 13:12:29.767370 containerd[1760]: time="2026-03-02T13:12:29.767342235Z" level=error msg="Failed to destroy network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.767989 containerd[1760]: time="2026-03-02T13:12:29.767958395Z" level=error msg="encountered an error cleaning up failed sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.768467 containerd[1760]: time="2026-03-02T13:12:29.768368676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-775879d688-ks9z6,Uid:a8eaf721-f0bb-465f-a36a-f220c434ea52,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.769205 kubelet[3222]: E0302 13:12:29.768860 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.769205 kubelet[3222]: E0302 13:12:29.769111 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-775879d688-ks9z6" Mar 2 13:12:29.769205 kubelet[3222]: E0302 13:12:29.769130 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-775879d688-ks9z6" Mar 2 13:12:29.769383 kubelet[3222]: E0302 13:12:29.769213 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-775879d688-ks9z6_calico-system(a8eaf721-f0bb-465f-a36a-f220c434ea52)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-775879d688-ks9z6_calico-system(a8eaf721-f0bb-465f-a36a-f220c434ea52)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-775879d688-ks9z6" podUID="a8eaf721-f0bb-465f-a36a-f220c434ea52" Mar 2 13:12:29.784950 containerd[1760]: time="2026-03-02T13:12:29.784876327Z" level=error msg="Failed to destroy network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.785852 containerd[1760]: time="2026-03-02T13:12:29.785428968Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.785944 containerd[1760]: time="2026-03-02T13:12:29.785734528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554bdbbf9-xb6cd,Uid:dd3978ae-983f-4585-acd3-45fc4f7a81fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.788372 kubelet[3222]: E0302 13:12:29.788337 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.788460 kubelet[3222]: E0302 13:12:29.788388 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-554bdbbf9-xb6cd" Mar 2 13:12:29.788460 kubelet[3222]: E0302 13:12:29.788405 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-554bdbbf9-xb6cd" Mar 2 13:12:29.788508 kubelet[3222]: E0302 13:12:29.788453 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-554bdbbf9-xb6cd_calico-system(dd3978ae-983f-4585-acd3-45fc4f7a81fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-554bdbbf9-xb6cd_calico-system(dd3978ae-983f-4585-acd3-45fc4f7a81fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-554bdbbf9-xb6cd" podUID="dd3978ae-983f-4585-acd3-45fc4f7a81fe" Mar 2 13:12:29.803217 containerd[1760]: time="2026-03-02T13:12:29.803091820Z" level=error msg="Failed to destroy network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.806918 containerd[1760]: time="2026-03-02T13:12:29.806877423Z" level=error msg="encountered an error cleaning up failed sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.807069 containerd[1760]: time="2026-03-02T13:12:29.807037743Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-nbv94,Uid:2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.808871 kubelet[3222]: E0302 13:12:29.807810 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.808871 kubelet[3222]: E0302 13:12:29.807859 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6ff4dd5d-nbv94" Mar 2 13:12:29.808871 kubelet[3222]: E0302 13:12:29.807880 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f6ff4dd5d-nbv94" Mar 2 13:12:29.809007 kubelet[3222]: E0302 13:12:29.807921 3222 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f6ff4dd5d-nbv94_calico-system(2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f6ff4dd5d-nbv94_calico-system(2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f6ff4dd5d-nbv94" podUID="2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548" Mar 2 13:12:29.809598 containerd[1760]: time="2026-03-02T13:12:29.809565984Z" level=info msg="StartContainer for \"98274eb76963d1eb75475eb3315a751e5eed3412c5c391b97f432307f10c4bf7\" returns successfully" Mar 2 13:12:29.824292 containerd[1760]: time="2026-03-02T13:12:29.824251955Z" level=error msg="Failed to destroy network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.824775 containerd[1760]: time="2026-03-02T13:12:29.824749275Z" level=error msg="encountered an error cleaning up failed sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.825261 containerd[1760]: time="2026-03-02T13:12:29.825234395Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-fl49c,Uid:421d6308-a09a-4e47-a594-09b64ce05a98,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.825917 kubelet[3222]: E0302 13:12:29.825878 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.825988 kubelet[3222]: E0302 13:12:29.825940 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-fl49c" Mar 2 13:12:29.825988 kubelet[3222]: E0302 13:12:29.825958 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-fl49c" Mar 2 13:12:29.826049 kubelet[3222]: E0302 13:12:29.826006 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7d764666f9-fl49c_kube-system(421d6308-a09a-4e47-a594-09b64ce05a98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-fl49c_kube-system(421d6308-a09a-4e47-a594-09b64ce05a98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-fl49c" podUID="421d6308-a09a-4e47-a594-09b64ce05a98" Mar 2 13:12:29.826718 containerd[1760]: time="2026-03-02T13:12:29.825766076Z" level=error msg="Failed to destroy network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.827442 containerd[1760]: time="2026-03-02T13:12:29.827196757Z" level=error msg="encountered an error cleaning up failed sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.827442 containerd[1760]: time="2026-03-02T13:12:29.827247197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-mwp5j,Uid:fe44d783-a407-4475-bb45-4b5e475fc600,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Mar 2 13:12:29.828107 kubelet[3222]: E0302 13:12:29.827764 3222 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:12:29.828107 kubelet[3222]: E0302 13:12:29.827815 3222 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7d7658d587-mwp5j" Mar 2 13:12:29.828107 kubelet[3222]: E0302 13:12:29.827924 3222 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7d7658d587-mwp5j" Mar 2 13:12:29.828363 kubelet[3222]: E0302 13:12:29.827964 3222 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7d7658d587-mwp5j_calico-system(fe44d783-a407-4475-bb45-4b5e475fc600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7d7658d587-mwp5j_calico-system(fe44d783-a407-4475-bb45-4b5e475fc600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7d7658d587-mwp5j" podUID="fe44d783-a407-4475-bb45-4b5e475fc600" Mar 2 13:12:30.563817 kubelet[3222]: I0302 13:12:30.563786 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:30.564593 containerd[1760]: time="2026-03-02T13:12:30.564484351Z" level=info msg="StopPodSandbox for \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\"" Mar 2 13:12:30.565175 containerd[1760]: time="2026-03-02T13:12:30.564944671Z" level=info msg="Ensure that sandbox c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb in task-service has been cleanup successfully" Mar 2 13:12:30.573004 kubelet[3222]: I0302 13:12:30.572981 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:30.575185 containerd[1760]: time="2026-03-02T13:12:30.574601878Z" level=info msg="StopPodSandbox for \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\"" Mar 2 13:12:30.577198 containerd[1760]: time="2026-03-02T13:12:30.577025080Z" level=info msg="Ensure that sandbox 5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d in task-service has been cleanup successfully" Mar 2 13:12:30.578736 kubelet[3222]: I0302 13:12:30.578308 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:30.580778 containerd[1760]: time="2026-03-02T13:12:30.580746002Z" level=info msg="StopPodSandbox for \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\"" Mar 2 
13:12:30.581733 kubelet[3222]: I0302 13:12:30.581711 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:30.582789 containerd[1760]: time="2026-03-02T13:12:30.582765044Z" level=info msg="StopPodSandbox for \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\"" Mar 2 13:12:30.583886 containerd[1760]: time="2026-03-02T13:12:30.583780924Z" level=info msg="Ensure that sandbox 6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc in task-service has been cleanup successfully" Mar 2 13:12:30.584789 containerd[1760]: time="2026-03-02T13:12:30.583298084Z" level=info msg="Ensure that sandbox c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892 in task-service has been cleanup successfully" Mar 2 13:12:30.588547 kubelet[3222]: I0302 13:12:30.588533 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:30.590524 containerd[1760]: time="2026-03-02T13:12:30.590492689Z" level=info msg="StopPodSandbox for \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\"" Mar 2 13:12:30.591443 containerd[1760]: time="2026-03-02T13:12:30.590989369Z" level=info msg="Ensure that sandbox 4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735 in task-service has been cleanup successfully" Mar 2 13:12:30.593268 kubelet[3222]: I0302 13:12:30.593098 3222 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:30.594551 containerd[1760]: time="2026-03-02T13:12:30.594526412Z" level=info msg="StopPodSandbox for \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\"" Mar 2 13:12:30.595817 containerd[1760]: time="2026-03-02T13:12:30.595748453Z" level=info msg="Ensure that sandbox 
f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc in task-service has been cleanup successfully" Mar 2 13:12:30.607156 kubelet[3222]: I0302 13:12:30.605655 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-x74x2" podStartSLOduration=1.932013257 podStartE2EDuration="23.60564254s" podCreationTimestamp="2026-03-02 13:12:07 +0000 UTC" firstStartedPulling="2026-03-02 13:12:07.886722488 +0000 UTC m=+21.559960235" lastFinishedPulling="2026-03-02 13:12:29.560351771 +0000 UTC m=+43.233589518" observedRunningTime="2026-03-02 13:12:30.603925979 +0000 UTC m=+44.277163766" watchObservedRunningTime="2026-03-02 13:12:30.60564254 +0000 UTC m=+44.278880287" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.766 [INFO][4383] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.766 [INFO][4383] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" iface="eth0" netns="/var/run/netns/cni-5df5dda6-c421-479f-b468-632f1ec0e78f" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.766 [INFO][4383] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" iface="eth0" netns="/var/run/netns/cni-5df5dda6-c421-479f-b468-632f1ec0e78f" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.766 [INFO][4383] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" iface="eth0" netns="/var/run/netns/cni-5df5dda6-c421-479f-b468-632f1ec0e78f" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.766 [INFO][4383] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.766 [INFO][4383] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.842 [INFO][4474] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.844 [INFO][4474] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.844 [INFO][4474] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.871 [WARNING][4474] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.871 [INFO][4474] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.872 [INFO][4474] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:30.885580 containerd[1760]: 2026-03-02 13:12:30.881 [INFO][4383] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:30.890010 containerd[1760]: time="2026-03-02T13:12:30.889519058Z" level=info msg="TearDown network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\" successfully" Mar 2 13:12:30.890010 containerd[1760]: time="2026-03-02T13:12:30.889551938Z" level=info msg="StopPodSandbox for \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\" returns successfully" Mar 2 13:12:30.890670 systemd[1]: run-netns-cni\x2d5df5dda6\x2dc421\x2d479f\x2db468\x2d632f1ec0e78f.mount: Deactivated successfully. Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.737 [INFO][4363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.738 [INFO][4363] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" iface="eth0" netns="/var/run/netns/cni-2c9df91e-ee14-db36-2de3-afee288d10b4" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.738 [INFO][4363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" iface="eth0" netns="/var/run/netns/cni-2c9df91e-ee14-db36-2de3-afee288d10b4" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.742 [INFO][4363] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" iface="eth0" netns="/var/run/netns/cni-2c9df91e-ee14-db36-2de3-afee288d10b4" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.742 [INFO][4363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.742 [INFO][4363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.848 [INFO][4466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.849 [INFO][4466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.873 [INFO][4466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.893 [WARNING][4466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.893 [INFO][4466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.896 [INFO][4466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:30.904474 containerd[1760]: 2026-03-02 13:12:30.901 [INFO][4363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:30.906782 containerd[1760]: time="2026-03-02T13:12:30.906434630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-nbv94,Uid:2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548,Namespace:calico-system,Attempt:1,}" Mar 2 13:12:30.906782 containerd[1760]: time="2026-03-02T13:12:30.906684110Z" level=info msg="TearDown network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\" successfully" Mar 2 13:12:30.906782 containerd[1760]: time="2026-03-02T13:12:30.906707030Z" level=info msg="StopPodSandbox for \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\" returns successfully" Mar 2 13:12:30.912131 systemd[1]: run-netns-cni\x2d2c9df91e\x2dee14\x2ddb36\x2d2de3\x2dafee288d10b4.mount: Deactivated successfully. 
Mar 2 13:12:30.914939 containerd[1760]: time="2026-03-02T13:12:30.914505635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-77p2h,Uid:da16cb8c-bc4e-45ca-bc1a-d50354581491,Namespace:calico-system,Attempt:1,}" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.793 [INFO][4412] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.793 [INFO][4412] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" iface="eth0" netns="/var/run/netns/cni-f134dab2-959b-8c60-7ea4-d3528f2632a4" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.794 [INFO][4412] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" iface="eth0" netns="/var/run/netns/cni-f134dab2-959b-8c60-7ea4-d3528f2632a4" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.795 [INFO][4412] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" iface="eth0" netns="/var/run/netns/cni-f134dab2-959b-8c60-7ea4-d3528f2632a4" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.795 [INFO][4412] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.795 [INFO][4412] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.876 [INFO][4480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.876 [INFO][4480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.895 [INFO][4480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.911 [WARNING][4480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.912 [INFO][4480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.915 [INFO][4480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:30.920202 containerd[1760]: 2026-03-02 13:12:30.917 [INFO][4412] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:30.921807 containerd[1760]: time="2026-03-02T13:12:30.921779600Z" level=info msg="TearDown network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\" successfully" Mar 2 13:12:30.921922 containerd[1760]: time="2026-03-02T13:12:30.921906800Z" level=info msg="StopPodSandbox for \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\" returns successfully" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.802 [INFO][4436] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.802 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" iface="eth0" netns="/var/run/netns/cni-257f8e05-e19a-5b47-8b62-a87ae17fa940" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.804 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" iface="eth0" netns="/var/run/netns/cni-257f8e05-e19a-5b47-8b62-a87ae17fa940" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.804 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" iface="eth0" netns="/var/run/netns/cni-257f8e05-e19a-5b47-8b62-a87ae17fa940" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.804 [INFO][4436] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.804 [INFO][4436] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.884 [INFO][4485] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.884 [INFO][4485] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.915 [INFO][4485] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.926 [WARNING][4485] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.926 [INFO][4485] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.929 [INFO][4485] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:30.938950 containerd[1760]: 2026-03-02 13:12:30.937 [INFO][4436] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:30.940115 containerd[1760]: time="2026-03-02T13:12:30.939931933Z" level=info msg="TearDown network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\" successfully" Mar 2 13:12:30.940115 containerd[1760]: time="2026-03-02T13:12:30.939961773Z" level=info msg="StopPodSandbox for \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\" returns successfully" Mar 2 13:12:30.946954 containerd[1760]: time="2026-03-02T13:12:30.946811298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fl49c,Uid:421d6308-a09a-4e47-a594-09b64ce05a98,Namespace:kube-system,Attempt:1,}" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.821 [INFO][4428] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.821 [INFO][4428] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" iface="eth0" netns="/var/run/netns/cni-3092cc2a-117b-cc86-9e5a-728513c695e3" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.821 [INFO][4428] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" iface="eth0" netns="/var/run/netns/cni-3092cc2a-117b-cc86-9e5a-728513c695e3" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.822 [INFO][4428] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" iface="eth0" netns="/var/run/netns/cni-3092cc2a-117b-cc86-9e5a-728513c695e3" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.822 [INFO][4428] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.822 [INFO][4428] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.889 [INFO][4495] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.889 [INFO][4495] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.930 [INFO][4495] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.946 [WARNING][4495] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.946 [INFO][4495] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.948 [INFO][4495] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:30.953346 containerd[1760]: 2026-03-02 13:12:30.950 [INFO][4428] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:30.958436 containerd[1760]: time="2026-03-02T13:12:30.953457102Z" level=info msg="TearDown network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\" successfully" Mar 2 13:12:30.958436 containerd[1760]: time="2026-03-02T13:12:30.953475462Z" level=info msg="StopPodSandbox for \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\" returns successfully" Mar 2 13:12:30.967926 containerd[1760]: time="2026-03-02T13:12:30.967888512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-mwp5j,Uid:fe44d783-a407-4475-bb45-4b5e475fc600,Namespace:calico-system,Attempt:1,}" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.797 [INFO][4427] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.797 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" iface="eth0" netns="/var/run/netns/cni-de409b52-3c54-9454-d302-4fb03e24ac5a" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.797 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" iface="eth0" netns="/var/run/netns/cni-de409b52-3c54-9454-d302-4fb03e24ac5a" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.798 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" iface="eth0" netns="/var/run/netns/cni-de409b52-3c54-9454-d302-4fb03e24ac5a" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.798 [INFO][4427] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.798 [INFO][4427] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.895 [INFO][4482] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.895 [INFO][4482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.948 [INFO][4482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.956 [WARNING][4482] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.958 [INFO][4482] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.965 [INFO][4482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:30.971156 containerd[1760]: 2026-03-02 13:12:30.968 [INFO][4427] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:30.971871 containerd[1760]: time="2026-03-02T13:12:30.971640275Z" level=info msg="TearDown network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\" successfully" Mar 2 13:12:30.971871 containerd[1760]: time="2026-03-02T13:12:30.971668315Z" level=info msg="StopPodSandbox for \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\" returns successfully" Mar 2 13:12:30.980014 containerd[1760]: time="2026-03-02T13:12:30.979980441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-775879d688-ks9z6,Uid:a8eaf721-f0bb-465f-a36a-f220c434ea52,Namespace:calico-system,Attempt:1,}" Mar 2 13:12:30.989112 kubelet[3222]: I0302 13:12:30.988888 3222 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-ca-bundle" pod "dd3978ae-983f-4585-acd3-45fc4f7a81fe" (UID: "dd3978ae-983f-4585-acd3-45fc4f7a81fe"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:12:30.989972 kubelet[3222]: I0302 13:12:30.989937 3222 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-ca-bundle\") pod \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " Mar 2 13:12:30.990025 kubelet[3222]: I0302 13:12:30.990008 3222 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/dd3978ae-983f-4585-acd3-45fc4f7a81fe-kube-api-access-pcgkb\" (UniqueName: \"kubernetes.io/projected/dd3978ae-983f-4585-acd3-45fc4f7a81fe-kube-api-access-pcgkb\") pod \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " Mar 2 13:12:30.990058 kubelet[3222]: I0302 13:12:30.990039 3222 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-backend-key-pair\") pod \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " Mar 2 13:12:30.990084 kubelet[3222]: I0302 13:12:30.990067 3222 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-nginx-config\" (UniqueName: \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-nginx-config\") pod \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\" (UID: \"dd3978ae-983f-4585-acd3-45fc4f7a81fe\") " Mar 2 13:12:30.990378 kubelet[3222]: I0302 13:12:30.990140 3222 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-ca-bundle\") on node 
\"ci-4081.3.101-d5e61b93e9\" DevicePath \"\"" Mar 2 13:12:30.990436 kubelet[3222]: I0302 13:12:30.990400 3222 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-nginx-config" pod "dd3978ae-983f-4585-acd3-45fc4f7a81fe" (UID: "dd3978ae-983f-4585-acd3-45fc4f7a81fe"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:12:30.995137 kubelet[3222]: I0302 13:12:30.994932 3222 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-backend-key-pair" pod "dd3978ae-983f-4585-acd3-45fc4f7a81fe" (UID: "dd3978ae-983f-4585-acd3-45fc4f7a81fe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:12:30.995489 kubelet[3222]: I0302 13:12:30.995467 3222 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3978ae-983f-4585-acd3-45fc4f7a81fe-kube-api-access-pcgkb" pod "dd3978ae-983f-4585-acd3-45fc4f7a81fe" (UID: "dd3978ae-983f-4585-acd3-45fc4f7a81fe"). InnerVolumeSpecName "kube-api-access-pcgkb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:12:31.090387 kubelet[3222]: I0302 13:12:31.090341 3222 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pcgkb\" (UniqueName: \"kubernetes.io/projected/dd3978ae-983f-4585-acd3-45fc4f7a81fe-kube-api-access-pcgkb\") on node \"ci-4081.3.101-d5e61b93e9\" DevicePath \"\"" Mar 2 13:12:31.090387 kubelet[3222]: I0302 13:12:31.090376 3222 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd3978ae-983f-4585-acd3-45fc4f7a81fe-whisker-backend-key-pair\") on node \"ci-4081.3.101-d5e61b93e9\" DevicePath \"\"" Mar 2 13:12:31.090387 kubelet[3222]: I0302 13:12:31.090387 3222 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dd3978ae-983f-4585-acd3-45fc4f7a81fe-nginx-config\") on node \"ci-4081.3.101-d5e61b93e9\" DevicePath \"\"" Mar 2 13:12:31.278231 systemd-networkd[1374]: caliaa755832667: Link UP Mar 2 13:12:31.278879 systemd-networkd[1374]: caliaa755832667: Gained carrier Mar 2 13:12:31.301658 systemd[1]: run-netns-cni\x2d3092cc2a\x2d117b\x2dcc86\x2d9e5a\x2d728513c695e3.mount: Deactivated successfully. Mar 2 13:12:31.301736 systemd[1]: run-netns-cni\x2d257f8e05\x2de19a\x2d5b47\x2d8b62\x2da87ae17fa940.mount: Deactivated successfully. Mar 2 13:12:31.301785 systemd[1]: run-netns-cni\x2df134dab2\x2d959b\x2d8c60\x2d7ea4\x2dd3528f2632a4.mount: Deactivated successfully. Mar 2 13:12:31.301831 systemd[1]: run-netns-cni\x2dde409b52\x2d3c54\x2d9454\x2dd302\x2d4fb03e24ac5a.mount: Deactivated successfully. Mar 2 13:12:31.301877 systemd[1]: var-lib-kubelet-pods-dd3978ae\x2d983f\x2d4585\x2dacd3\x2d45fc4f7a81fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpcgkb.mount: Deactivated successfully. 
Mar 2 13:12:31.301928 systemd[1]: var-lib-kubelet-pods-dd3978ae\x2d983f\x2d4585\x2dacd3\x2d45fc4f7a81fe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 2 13:12:31.326802 systemd-networkd[1374]: cali093d9108fea: Link UP Mar 2 13:12:31.326932 systemd-networkd[1374]: cali093d9108fea: Gained carrier Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.033 [ERROR][4523] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.050 [INFO][4523] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0 calico-apiserver-7f6ff4dd5d- calico-system da16cb8c-bc4e-45ca-bc1a-d50354581491 894 0 2026-03-02 13:12:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6ff4dd5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 calico-apiserver-7f6ff4dd5d-77p2h eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali093d9108fea [] [] }} ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.050 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" 
WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.134 [INFO][4544] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" HandleID="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.171 [INFO][4544] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" HandleID="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002735a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"calico-apiserver-7f6ff4dd5d-77p2h", "timestamp":"2026-03-02 13:12:31.134817549 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010cb00)} Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.171 [INFO][4544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.213 [INFO][4544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.213 [INFO][4544] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.216 [INFO][4544] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.249 [INFO][4544] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.258 [INFO][4544] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.262 [INFO][4544] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.271 [INFO][4544] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.271 [INFO][4544] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.277 [INFO][4544] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.290 [INFO][4544] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.299 [INFO][4544] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.2/26] block=192.168.60.0/26 handle="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.299 [INFO][4544] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.2/26] handle="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.299 [INFO][4544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:31.354932 containerd[1760]: 2026-03-02 13:12:31.299 [INFO][4544] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.2/26] IPv6=[] ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" HandleID="k8s-pod-network.425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.355489 containerd[1760]: 2026-03-02 13:12:31.320 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"da16cb8c-bc4e-45ca-bc1a-d50354581491", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"calico-apiserver-7f6ff4dd5d-77p2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali093d9108fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.355489 containerd[1760]: 2026-03-02 13:12:31.320 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.2/32] ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.355489 containerd[1760]: 2026-03-02 13:12:31.320 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali093d9108fea ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.355489 containerd[1760]: 2026-03-02 13:12:31.322 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" 
WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.355489 containerd[1760]: 2026-03-02 13:12:31.323 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"da16cb8c-bc4e-45ca-bc1a-d50354581491", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b", Pod:"calico-apiserver-7f6ff4dd5d-77p2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali093d9108fea", MAC:"66:62:00:78:20:33", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.355489 containerd[1760]: 2026-03-02 13:12:31.339 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-77p2h" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.006 [ERROR][4512] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.023 [INFO][4512] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0 calico-apiserver-7f6ff4dd5d- calico-system 2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548 895 0 2026-03-02 13:12:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f6ff4dd5d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 calico-apiserver-7f6ff4dd5d-nbv94 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliaa755832667 [] [] }} ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.023 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" 
Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.064 [INFO][4535] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" HandleID="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.096 [INFO][4535] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" HandleID="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"calico-apiserver-7f6ff4dd5d-nbv94", "timestamp":"2026-03-02 13:12:31.0648461 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400040ef20)} Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.096 [INFO][4535] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.096 [INFO][4535] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.097 [INFO][4535] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.104 [INFO][4535] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.135 [INFO][4535] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.159 [INFO][4535] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.167 [INFO][4535] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.175 [INFO][4535] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.175 [INFO][4535] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.181 [INFO][4535] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.199 [INFO][4535] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.213 [INFO][4535] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.1/26] block=192.168.60.0/26 handle="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.213 [INFO][4535] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.1/26] handle="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.213 [INFO][4535] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:31.356303 containerd[1760]: 2026-03-02 13:12:31.218 [INFO][4535] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.1/26] IPv6=[] ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" HandleID="k8s-pod-network.a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.356784 containerd[1760]: 2026-03-02 13:12:31.233 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"calico-apiserver-7f6ff4dd5d-nbv94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaa755832667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.356784 containerd[1760]: 2026-03-02 13:12:31.233 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.1/32] ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.356784 containerd[1760]: 2026-03-02 13:12:31.234 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa755832667 ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.356784 containerd[1760]: 2026-03-02 13:12:31.280 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" 
WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.356784 containerd[1760]: 2026-03-02 13:12:31.291 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd", Pod:"calico-apiserver-7f6ff4dd5d-nbv94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaa755832667", MAC:"12:c7:a9:56:b4:96", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.356784 containerd[1760]: 2026-03-02 13:12:31.334 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd" Namespace="calico-system" Pod="calico-apiserver-7f6ff4dd5d-nbv94" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:31.438814 containerd[1760]: time="2026-03-02T13:12:31.437312560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:31.438814 containerd[1760]: time="2026-03-02T13:12:31.437497640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:31.438814 containerd[1760]: time="2026-03-02T13:12:31.437510640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.438814 containerd[1760]: time="2026-03-02T13:12:31.437596760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.485405 containerd[1760]: time="2026-03-02T13:12:31.483970792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:31.485405 containerd[1760]: time="2026-03-02T13:12:31.484027072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:31.485405 containerd[1760]: time="2026-03-02T13:12:31.484049872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.485405 containerd[1760]: time="2026-03-02T13:12:31.484128713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.523763 systemd[1]: Started cri-containerd-425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b.scope - libcontainer container 425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b. Mar 2 13:12:31.548945 systemd-networkd[1374]: cali8e919f7999c: Link UP Mar 2 13:12:31.552215 systemd-networkd[1374]: cali8e919f7999c: Gained carrier Mar 2 13:12:31.555961 systemd[1]: Started cri-containerd-a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd.scope - libcontainer container a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd. Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.209 [ERROR][4557] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.258 [INFO][4557] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0 calico-kube-controllers-775879d688- calico-system a8eaf721-f0bb-465f-a36a-f220c434ea52 897 0 2026-03-02 13:12:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:775879d688 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 calico-kube-controllers-775879d688-ks9z6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8e919f7999c [] [] }} 
ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.259 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.377 [INFO][4628] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" HandleID="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.423 [INFO][4628] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" HandleID="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"calico-kube-controllers-775879d688-ks9z6", "timestamp":"2026-03-02 13:12:31.377574438 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003bcdc0)} Mar 2 
13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.423 [INFO][4628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.423 [INFO][4628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.423 [INFO][4628] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.426 [INFO][4628] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.459 [INFO][4628] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.478 [INFO][4628] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.480 [INFO][4628] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.483 [INFO][4628] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.483 [INFO][4628] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.488 [INFO][4628] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093 Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.515 [INFO][4628] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.60.0/26 handle="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.528 [INFO][4628] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.60.3/26] block=192.168.60.0/26 handle="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.528 [INFO][4628] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.3/26] handle="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.528 [INFO][4628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:31.580240 containerd[1760]: 2026-03-02 13:12:31.529 [INFO][4628] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.3/26] IPv6=[] ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" HandleID="k8s-pod-network.2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:31.580945 containerd[1760]: 2026-03-02 13:12:31.538 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0", GenerateName:"calico-kube-controllers-775879d688-", Namespace:"calico-system", SelfLink:"", UID:"a8eaf721-f0bb-465f-a36a-f220c434ea52", 
ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"775879d688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"calico-kube-controllers-775879d688-ks9z6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e919f7999c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.580945 containerd[1760]: 2026-03-02 13:12:31.538 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.3/32] ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:31.580945 containerd[1760]: 2026-03-02 13:12:31.538 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e919f7999c ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 
13:12:31.580945 containerd[1760]: 2026-03-02 13:12:31.552 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:31.580945 containerd[1760]: 2026-03-02 13:12:31.557 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0", GenerateName:"calico-kube-controllers-775879d688-", Namespace:"calico-system", SelfLink:"", UID:"a8eaf721-f0bb-465f-a36a-f220c434ea52", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"775879d688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093", 
Pod:"calico-kube-controllers-775879d688-ks9z6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e919f7999c", MAC:"2e:69:a2:fe:13:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.580945 containerd[1760]: 2026-03-02 13:12:31.572 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093" Namespace="calico-system" Pod="calico-kube-controllers-775879d688-ks9z6" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:31.611468 systemd[1]: Removed slice kubepods-besteffort-poddd3978ae_983f_4585_acd3_45fc4f7a81fe.slice - libcontainer container kubepods-besteffort-poddd3978ae_983f_4585_acd3_45fc4f7a81fe.slice. Mar 2 13:12:31.655747 systemd-networkd[1374]: cali42a8c80f77c: Link UP Mar 2 13:12:31.655923 systemd-networkd[1374]: cali42a8c80f77c: Gained carrier Mar 2 13:12:31.676837 containerd[1760]: time="2026-03-02T13:12:31.637764980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:31.676837 containerd[1760]: time="2026-03-02T13:12:31.637813340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:31.676837 containerd[1760]: time="2026-03-02T13:12:31.637830940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.676837 containerd[1760]: time="2026-03-02T13:12:31.637929060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.680621 containerd[1760]: time="2026-03-02T13:12:31.680525010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-77p2h,Uid:da16cb8c-bc4e-45ca-bc1a-d50354581491,Namespace:calico-system,Attempt:1,} returns sandbox id \"425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b\"" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.170 [ERROR][4548] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.232 [INFO][4548] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0 coredns-7d764666f9- kube-system 421d6308-a09a-4e47-a594-09b64ce05a98 898 0 2026-03-02 13:11:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 coredns-7d764666f9-fl49c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali42a8c80f77c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.233 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" 
WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.429 [INFO][4601] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" HandleID="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.459 [INFO][4601] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" HandleID="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbef0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"coredns-7d764666f9-fl49c", "timestamp":"2026-03-02 13:12:31.429781675 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000405ce0)} Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.459 [INFO][4601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.528 [INFO][4601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.528 [INFO][4601] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.544 [INFO][4601] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.575 [INFO][4601] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.587 [INFO][4601] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.590 [INFO][4601] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.593 [INFO][4601] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.594 [INFO][4601] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.596 [INFO][4601] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.605 [INFO][4601] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.631 [INFO][4601] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.4/26] block=192.168.60.0/26 handle="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.634 [INFO][4601] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.4/26] handle="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.634 [INFO][4601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:31.693628 containerd[1760]: 2026-03-02 13:12:31.635 [INFO][4601] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.4/26] IPv6=[] ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" HandleID="k8s-pod-network.d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.694147 containerd[1760]: 2026-03-02 13:12:31.647 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"421d6308-a09a-4e47-a594-09b64ce05a98", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"coredns-7d764666f9-fl49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a8c80f77c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.694147 containerd[1760]: 2026-03-02 13:12:31.647 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.4/32] ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.694147 containerd[1760]: 2026-03-02 13:12:31.647 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42a8c80f77c 
ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.694147 containerd[1760]: 2026-03-02 13:12:31.655 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.694147 containerd[1760]: 2026-03-02 13:12:31.658 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"421d6308-a09a-4e47-a594-09b64ce05a98", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c", 
Pod:"coredns-7d764666f9-fl49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a8c80f77c", MAC:"26:bc:c7:61:81:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.694346 containerd[1760]: 2026-03-02 13:12:31.683 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c" Namespace="kube-system" Pod="coredns-7d764666f9-fl49c" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:31.699961 containerd[1760]: time="2026-03-02T13:12:31.698454422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:12:31.757068 containerd[1760]: time="2026-03-02T13:12:31.756813623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:31.757068 containerd[1760]: time="2026-03-02T13:12:31.756875823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:31.757068 containerd[1760]: time="2026-03-02T13:12:31.756890983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.757068 containerd[1760]: time="2026-03-02T13:12:31.756974943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.790467 systemd-networkd[1374]: calie8e95b0cba4: Link UP Mar 2 13:12:31.792263 systemd-networkd[1374]: calie8e95b0cba4: Gained carrier Mar 2 13:12:31.804302 kubelet[3222]: I0302 13:12:31.802195 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/fed73ec2-a24a-4456-99df-a4e264633517-nginx-config\") pod \"whisker-5cddd77687-ljvpr\" (UID: \"fed73ec2-a24a-4456-99df-a4e264633517\") " pod="calico-system/whisker-5cddd77687-ljvpr" Mar 2 13:12:31.804302 kubelet[3222]: I0302 13:12:31.802235 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fed73ec2-a24a-4456-99df-a4e264633517-whisker-backend-key-pair\") pod \"whisker-5cddd77687-ljvpr\" (UID: \"fed73ec2-a24a-4456-99df-a4e264633517\") " pod="calico-system/whisker-5cddd77687-ljvpr" Mar 2 13:12:31.804302 kubelet[3222]: I0302 13:12:31.802258 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5pzb\" (UniqueName: \"kubernetes.io/projected/fed73ec2-a24a-4456-99df-a4e264633517-kube-api-access-p5pzb\") pod \"whisker-5cddd77687-ljvpr\" (UID: \"fed73ec2-a24a-4456-99df-a4e264633517\") " pod="calico-system/whisker-5cddd77687-ljvpr" Mar 2 13:12:31.804302 kubelet[3222]: I0302 13:12:31.802274 3222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fed73ec2-a24a-4456-99df-a4e264633517-whisker-ca-bundle\") pod \"whisker-5cddd77687-ljvpr\" (UID: \"fed73ec2-a24a-4456-99df-a4e264633517\") " pod="calico-system/whisker-5cddd77687-ljvpr" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.192 [ERROR][4562] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.231 [INFO][4562] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0 goldmane-7d7658d587- calico-system fe44d783-a407-4475-bb45-4b5e475fc600 899 0 2026-03-02 13:12:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7d7658d587 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 goldmane-7d7658d587-mwp5j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie8e95b0cba4 [] [] }} ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.231 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.428 [INFO][4603] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" HandleID="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.478 [INFO][4603] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" HandleID="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000606010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"goldmane-7d7658d587-mwp5j", "timestamp":"2026-03-02 13:12:31.428217274 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400050f1e0)} Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.478 [INFO][4603] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.636 [INFO][4603] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.637 [INFO][4603] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.652 [INFO][4603] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.683 [INFO][4603] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.700 [INFO][4603] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.711 [INFO][4603] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.721 [INFO][4603] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.724 [INFO][4603] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.730 [INFO][4603] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.743 [INFO][4603] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.786 [INFO][4603] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.5/26] block=192.168.60.0/26 handle="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.786 [INFO][4603] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.5/26] handle="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.786 [INFO][4603] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:31.829521 containerd[1760]: 2026-03-02 13:12:31.786 [INFO][4603] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.5/26] IPv6=[] ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" HandleID="k8s-pod-network.e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.827823 systemd[1]: Started cri-containerd-2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093.scope - libcontainer container 2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093. 
Mar 2 13:12:31.831738 containerd[1760]: 2026-03-02 13:12:31.788 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"fe44d783-a407-4475-bb45-4b5e475fc600", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"goldmane-7d7658d587-mwp5j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie8e95b0cba4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.831738 containerd[1760]: 2026-03-02 13:12:31.788 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.5/32] ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" 
Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.831738 containerd[1760]: 2026-03-02 13:12:31.788 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8e95b0cba4 ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.831738 containerd[1760]: 2026-03-02 13:12:31.791 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.831738 containerd[1760]: 2026-03-02 13:12:31.791 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"fe44d783-a407-4475-bb45-4b5e475fc600", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c", Pod:"goldmane-7d7658d587-mwp5j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie8e95b0cba4", MAC:"e2:ca:22:99:c0:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:31.831738 containerd[1760]: 2026-03-02 13:12:31.816 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c" Namespace="calico-system" Pod="goldmane-7d7658d587-mwp5j" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:31.840113 systemd[1]: Started cri-containerd-d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c.scope - libcontainer container d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c. Mar 2 13:12:31.851323 systemd[1]: Created slice kubepods-besteffort-podfed73ec2_a24a_4456_99df_a4e264633517.slice - libcontainer container kubepods-besteffort-podfed73ec2_a24a_4456_99df_a4e264633517.slice. Mar 2 13:12:31.870215 containerd[1760]: time="2026-03-02T13:12:31.869927262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:31.870215 containerd[1760]: time="2026-03-02T13:12:31.869990022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:31.870215 containerd[1760]: time="2026-03-02T13:12:31.870013222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.870215 containerd[1760]: time="2026-03-02T13:12:31.870079462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:31.876319 containerd[1760]: time="2026-03-02T13:12:31.876015826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f6ff4dd5d-nbv94,Uid:2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548,Namespace:calico-system,Attempt:1,} returns sandbox id \"a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd\"" Mar 2 13:12:31.907355 systemd[1]: Started cri-containerd-e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c.scope - libcontainer container e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c. 
Mar 2 13:12:31.936579 containerd[1760]: time="2026-03-02T13:12:31.936536388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fl49c,Uid:421d6308-a09a-4e47-a594-09b64ce05a98,Namespace:kube-system,Attempt:1,} returns sandbox id \"d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c\"" Mar 2 13:12:31.951965 containerd[1760]: time="2026-03-02T13:12:31.951843239Z" level=info msg="CreateContainer within sandbox \"d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:12:31.973895 containerd[1760]: time="2026-03-02T13:12:31.973862934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-775879d688-ks9z6,Uid:a8eaf721-f0bb-465f-a36a-f220c434ea52,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093\"" Mar 2 13:12:32.008062 containerd[1760]: time="2026-03-02T13:12:32.007864958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7d7658d587-mwp5j,Uid:fe44d783-a407-4475-bb45-4b5e475fc600,Namespace:calico-system,Attempt:1,} returns sandbox id \"e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c\"" Mar 2 13:12:32.032191 containerd[1760]: time="2026-03-02T13:12:32.031725174Z" level=info msg="CreateContainer within sandbox \"d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d0de57f27790324523d100582a59541870467734541ab2b9028ba55bc0ec323\"" Mar 2 13:12:32.032593 containerd[1760]: time="2026-03-02T13:12:32.032466975Z" level=info msg="StartContainer for \"0d0de57f27790324523d100582a59541870467734541ab2b9028ba55bc0ec323\"" Mar 2 13:12:32.068322 systemd[1]: Started cri-containerd-0d0de57f27790324523d100582a59541870467734541ab2b9028ba55bc0ec323.scope - libcontainer container 0d0de57f27790324523d100582a59541870467734541ab2b9028ba55bc0ec323. 
Mar 2 13:12:32.105304 containerd[1760]: time="2026-03-02T13:12:32.105264186Z" level=info msg="StartContainer for \"0d0de57f27790324523d100582a59541870467734541ab2b9028ba55bc0ec323\" returns successfully" Mar 2 13:12:32.164027 containerd[1760]: time="2026-03-02T13:12:32.162155265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cddd77687-ljvpr,Uid:fed73ec2-a24a-4456-99df-a4e264633517,Namespace:calico-system,Attempt:0,}" Mar 2 13:12:32.301193 kernel: calico-node[5007]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 13:12:32.316316 systemd-networkd[1374]: caliaa755832667: Gained IPv6LL Mar 2 13:12:32.368466 systemd-networkd[1374]: cali4176aaa4dc9: Link UP Mar 2 13:12:32.372231 systemd-networkd[1374]: cali4176aaa4dc9: Gained carrier Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.256 [INFO][5013] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0 whisker-5cddd77687- calico-system fed73ec2-a24a-4456-99df-a4e264633517 931 0 2026-03-02 13:12:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cddd77687 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 whisker-5cddd77687-ljvpr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4176aaa4dc9 [] [] }} ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.256 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" 
WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.306 [INFO][5037] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" HandleID="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.318 [INFO][5037] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" HandleID="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"whisker-5cddd77687-ljvpr", "timestamp":"2026-03-02 13:12:32.306266646 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f2f20)} Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.318 [INFO][5037] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.318 [INFO][5037] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.318 [INFO][5037] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.323 [INFO][5037] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.329 [INFO][5037] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.335 [INFO][5037] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.340 [INFO][5037] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.344 [INFO][5037] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.344 [INFO][5037] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.346 [INFO][5037] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.352 [INFO][5037] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.362 [INFO][5037] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.6/26] block=192.168.60.0/26 handle="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.362 [INFO][5037] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.6/26] handle="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.362 [INFO][5037] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:32.399196 containerd[1760]: 2026-03-02 13:12:32.362 [INFO][5037] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.6/26] IPv6=[] ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" HandleID="k8s-pod-network.c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.399743 containerd[1760]: 2026-03-02 13:12:32.365 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0", GenerateName:"whisker-5cddd77687-", Namespace:"calico-system", SelfLink:"", UID:"fed73ec2-a24a-4456-99df-a4e264633517", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cddd77687", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"whisker-5cddd77687-ljvpr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4176aaa4dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:32.399743 containerd[1760]: 2026-03-02 13:12:32.365 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.6/32] ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.399743 containerd[1760]: 2026-03-02 13:12:32.365 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4176aaa4dc9 ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.399743 containerd[1760]: 2026-03-02 13:12:32.372 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.399743 containerd[1760]: 2026-03-02 13:12:32.374 [INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0", GenerateName:"whisker-5cddd77687-", Namespace:"calico-system", SelfLink:"", UID:"fed73ec2-a24a-4456-99df-a4e264633517", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cddd77687", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb", Pod:"whisker-5cddd77687-ljvpr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.60.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4176aaa4dc9", MAC:"ca:39:f2:c5:1d:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:32.399743 containerd[1760]: 2026-03-02 13:12:32.390 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb" Namespace="calico-system" Pod="whisker-5cddd77687-ljvpr" 
WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--5cddd77687--ljvpr-eth0" Mar 2 13:12:32.425976 kubelet[3222]: I0302 13:12:32.425791 3222 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dd3978ae-983f-4585-acd3-45fc4f7a81fe" path="/var/lib/kubelet/pods/dd3978ae-983f-4585-acd3-45fc4f7a81fe/volumes" Mar 2 13:12:32.427696 containerd[1760]: time="2026-03-02T13:12:32.427499571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:32.427696 containerd[1760]: time="2026-03-02T13:12:32.427556331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:32.427696 containerd[1760]: time="2026-03-02T13:12:32.427573211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:32.427696 containerd[1760]: time="2026-03-02T13:12:32.427651971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:32.459309 systemd[1]: Started cri-containerd-c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb.scope - libcontainer container c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb. 
Mar 2 13:12:32.511542 containerd[1760]: time="2026-03-02T13:12:32.511484309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cddd77687-ljvpr,Uid:fed73ec2-a24a-4456-99df-a4e264633517,Namespace:calico-system,Attempt:0,} returns sandbox id \"c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb\"" Mar 2 13:12:32.623484 kubelet[3222]: I0302 13:12:32.623291 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-fl49c" podStartSLOduration=39.623279427 podStartE2EDuration="39.623279427s" podCreationTimestamp="2026-03-02 13:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:12:32.621275026 +0000 UTC m=+46.294512813" watchObservedRunningTime="2026-03-02 13:12:32.623279427 +0000 UTC m=+46.296517214" Mar 2 13:12:32.701499 systemd-networkd[1374]: cali8e919f7999c: Gained IPv6LL Mar 2 13:12:32.702011 systemd-networkd[1374]: cali093d9108fea: Gained IPv6LL Mar 2 13:12:32.865789 systemd-networkd[1374]: vxlan.calico: Link UP Mar 2 13:12:32.865796 systemd-networkd[1374]: vxlan.calico: Gained carrier Mar 2 13:12:33.276337 systemd-networkd[1374]: cali42a8c80f77c: Gained IPv6LL Mar 2 13:12:33.340354 systemd-networkd[1374]: calie8e95b0cba4: Gained IPv6LL Mar 2 13:12:33.852324 systemd-networkd[1374]: cali4176aaa4dc9: Gained IPv6LL Mar 2 13:12:34.529916 containerd[1760]: time="2026-03-02T13:12:34.529874837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:34.533831 containerd[1760]: time="2026-03-02T13:12:34.533635400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=45512258" Mar 2 13:12:34.537391 containerd[1760]: time="2026-03-02T13:12:34.537311002Z" level=info msg="ImageCreate event 
name:\"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:34.542753 containerd[1760]: time="2026-03-02T13:12:34.541948206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:34.542753 containerd[1760]: time="2026-03-02T13:12:34.542637966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"46909799\" in 2.844140864s" Mar 2 13:12:34.542753 containerd[1760]: time="2026-03-02T13:12:34.542665566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\"" Mar 2 13:12:34.544259 containerd[1760]: time="2026-03-02T13:12:34.544232207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:12:34.551768 containerd[1760]: time="2026-03-02T13:12:34.551738052Z" level=info msg="CreateContainer within sandbox \"425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:12:34.581152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190297725.mount: Deactivated successfully. 
Mar 2 13:12:34.593337 containerd[1760]: time="2026-03-02T13:12:34.593273481Z" level=info msg="CreateContainer within sandbox \"425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"68f0da36e24b7427868d3604aeaccb698b6cdeedf2d15cfddd2d0ed73ce51658\"" Mar 2 13:12:34.593879 containerd[1760]: time="2026-03-02T13:12:34.593733682Z" level=info msg="StartContainer for \"68f0da36e24b7427868d3604aeaccb698b6cdeedf2d15cfddd2d0ed73ce51658\"" Mar 2 13:12:34.625352 systemd[1]: Started cri-containerd-68f0da36e24b7427868d3604aeaccb698b6cdeedf2d15cfddd2d0ed73ce51658.scope - libcontainer container 68f0da36e24b7427868d3604aeaccb698b6cdeedf2d15cfddd2d0ed73ce51658. Mar 2 13:12:34.657685 containerd[1760]: time="2026-03-02T13:12:34.657641166Z" level=info msg="StartContainer for \"68f0da36e24b7427868d3604aeaccb698b6cdeedf2d15cfddd2d0ed73ce51658\" returns successfully" Mar 2 13:12:34.684338 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL Mar 2 13:12:34.868771 containerd[1760]: time="2026-03-02T13:12:34.868664753Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:34.872351 containerd[1760]: time="2026-03-02T13:12:34.872322756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active requests=0, bytes read=77" Mar 2 13:12:34.874325 containerd[1760]: time="2026-03-02T13:12:34.874187237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"46909799\" in 329.92011ms" Mar 2 13:12:34.874325 containerd[1760]: time="2026-03-02T13:12:34.874225237Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\"" Mar 2 13:12:34.876100 containerd[1760]: time="2026-03-02T13:12:34.876068479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 13:12:34.882309 containerd[1760]: time="2026-03-02T13:12:34.882278443Z" level=info msg="CreateContainer within sandbox \"a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:12:34.924033 containerd[1760]: time="2026-03-02T13:12:34.923924792Z" level=info msg="CreateContainer within sandbox \"a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"47f5c28471f7f31d716824ce43885853e953c6223378abf9e0ed5381917eb85d\"" Mar 2 13:12:34.924623 containerd[1760]: time="2026-03-02T13:12:34.924600872Z" level=info msg="StartContainer for \"47f5c28471f7f31d716824ce43885853e953c6223378abf9e0ed5381917eb85d\"" Mar 2 13:12:34.957317 systemd[1]: Started cri-containerd-47f5c28471f7f31d716824ce43885853e953c6223378abf9e0ed5381917eb85d.scope - libcontainer container 47f5c28471f7f31d716824ce43885853e953c6223378abf9e0ed5381917eb85d. 
Mar 2 13:12:35.085611 containerd[1760]: time="2026-03-02T13:12:35.085401345Z" level=info msg="StartContainer for \"47f5c28471f7f31d716824ce43885853e953c6223378abf9e0ed5381917eb85d\" returns successfully" Mar 2 13:12:35.666247 kubelet[3222]: I0302 13:12:35.665457 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f6ff4dd5d-77p2h" podStartSLOduration=26.817127522 podStartE2EDuration="29.665443189s" podCreationTimestamp="2026-03-02 13:12:06 +0000 UTC" firstStartedPulling="2026-03-02 13:12:31.6956021 +0000 UTC m=+45.368839887" lastFinishedPulling="2026-03-02 13:12:34.543917767 +0000 UTC m=+48.217155554" observedRunningTime="2026-03-02 13:12:35.664742669 +0000 UTC m=+49.337980456" watchObservedRunningTime="2026-03-02 13:12:35.665443189 +0000 UTC m=+49.338680976" Mar 2 13:12:35.666247 kubelet[3222]: I0302 13:12:35.665816 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f6ff4dd5d-nbv94" podStartSLOduration=26.673649062 podStartE2EDuration="29.665808429s" podCreationTimestamp="2026-03-02 13:12:06 +0000 UTC" firstStartedPulling="2026-03-02 13:12:31.882849551 +0000 UTC m=+45.556087338" lastFinishedPulling="2026-03-02 13:12:34.875008918 +0000 UTC m=+48.548246705" observedRunningTime="2026-03-02 13:12:35.644217374 +0000 UTC m=+49.317455201" watchObservedRunningTime="2026-03-02 13:12:35.665808429 +0000 UTC m=+49.339046216" Mar 2 13:12:36.632973 kubelet[3222]: I0302 13:12:36.632870 3222 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:12:36.632973 kubelet[3222]: I0302 13:12:36.632897 3222 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:12:39.108631 kubelet[3222]: I0302 13:12:39.108598 3222 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:12:40.055980 containerd[1760]: time="2026-03-02T13:12:40.055346446Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:40.058537 containerd[1760]: time="2026-03-02T13:12:40.058500489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=49157508" Mar 2 13:12:40.061905 containerd[1760]: time="2026-03-02T13:12:40.061872371Z" level=info msg="ImageCreate event name:\"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:40.066255 containerd[1760]: time="2026-03-02T13:12:40.066221895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:40.067095 containerd[1760]: time="2026-03-02T13:12:40.067064455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"50555001\" in 5.190958176s" Mar 2 13:12:40.067155 containerd[1760]: time="2026-03-02T13:12:40.067097656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\"" Mar 2 13:12:40.068523 containerd[1760]: time="2026-03-02T13:12:40.068362697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 13:12:40.081938 containerd[1760]: time="2026-03-02T13:12:40.081908267Z" level=info msg="CreateContainer within sandbox \"2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 13:12:40.117415 containerd[1760]: time="2026-03-02T13:12:40.117374056Z" level=info msg="CreateContainer within sandbox \"2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63\"" Mar 2 13:12:40.120472 containerd[1760]: time="2026-03-02T13:12:40.118753297Z" level=info msg="StartContainer for \"b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63\"" Mar 2 13:12:40.150389 systemd[1]: Started cri-containerd-b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63.scope - libcontainer container b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63. Mar 2 13:12:40.211494 containerd[1760]: time="2026-03-02T13:12:40.211231010Z" level=info msg="StartContainer for \"b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63\" returns successfully" Mar 2 13:12:40.424004 containerd[1760]: time="2026-03-02T13:12:40.423886339Z" level=info msg="StopPodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\"" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.485 [INFO][5378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.487 [INFO][5378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" iface="eth0" netns="/var/run/netns/cni-1bec46ad-fffc-d033-e672-922e49fcc1f7" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.488 [INFO][5378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" iface="eth0" netns="/var/run/netns/cni-1bec46ad-fffc-d033-e672-922e49fcc1f7" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.489 [INFO][5378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" iface="eth0" netns="/var/run/netns/cni-1bec46ad-fffc-d033-e672-922e49fcc1f7" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.489 [INFO][5378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.490 [INFO][5378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.511 [INFO][5385] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.511 [INFO][5385] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.511 [INFO][5385] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.519 [WARNING][5385] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.520 [INFO][5385] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.521 [INFO][5385] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:40.525071 containerd[1760]: 2026-03-02 13:12:40.522 [INFO][5378] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:40.525071 containerd[1760]: time="2026-03-02T13:12:40.525005900Z" level=info msg="TearDown network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" successfully" Mar 2 13:12:40.525071 containerd[1760]: time="2026-03-02T13:12:40.525032820Z" level=info msg="StopPodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" returns successfully" Mar 2 13:12:40.661519 kubelet[3222]: I0302 13:12:40.659400 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-775879d688-ks9z6" podStartSLOduration=25.577861854 podStartE2EDuration="33.659384487s" podCreationTimestamp="2026-03-02 13:12:07 +0000 UTC" firstStartedPulling="2026-03-02 13:12:31.986494943 +0000 UTC m=+45.659732730" lastFinishedPulling="2026-03-02 13:12:40.068017536 +0000 UTC m=+53.741255363" observedRunningTime="2026-03-02 13:12:40.658479246 +0000 UTC m=+54.331717033" watchObservedRunningTime="2026-03-02 13:12:40.659384487 +0000 UTC 
m=+54.332622274" Mar 2 13:12:40.749687 kubelet[3222]: I0302 13:12:40.747065 3222 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:12:40.750768 containerd[1760]: time="2026-03-02T13:12:40.750732119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbvnj,Uid:662b1d4a-a279-406c-96fb-fce38eb91097,Namespace:calico-system,Attempt:1,}" Mar 2 13:12:40.903701 systemd-networkd[1374]: caliae4430da851: Link UP Mar 2 13:12:40.904153 systemd-networkd[1374]: caliae4430da851: Gained carrier Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.827 [INFO][5412] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0 csi-node-driver- calico-system 662b1d4a-a279-406c-96fb-fce38eb91097 1000 0 2026-03-02 13:12:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5d8f55657d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 csi-node-driver-nbvnj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliae4430da851 [] [] }} ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.827 [INFO][5412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.853 [INFO][5427] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" HandleID="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.866 [INFO][5427] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" HandleID="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3450), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"csi-node-driver-nbvnj", "timestamp":"2026-03-02 13:12:40.853236241 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400024adc0)} Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.866 [INFO][5427] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.866 [INFO][5427] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.866 [INFO][5427] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.868 [INFO][5427] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.872 [INFO][5427] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.877 [INFO][5427] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.880 [INFO][5427] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.882 [INFO][5427] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.882 [INFO][5427] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.883 [INFO][5427] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252 Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.888 [INFO][5427] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.897 [INFO][5427] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.7/26] block=192.168.60.0/26 handle="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.897 [INFO][5427] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.7/26] handle="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.897 [INFO][5427] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:40.925862 containerd[1760]: 2026-03-02 13:12:40.897 [INFO][5427] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.7/26] IPv6=[] ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" HandleID="k8s-pod-network.eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.927300 containerd[1760]: 2026-03-02 13:12:40.900 [INFO][5412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"662b1d4a-a279-406c-96fb-fce38eb91097", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"csi-node-driver-nbvnj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae4430da851", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:40.927300 containerd[1760]: 2026-03-02 13:12:40.900 [INFO][5412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.7/32] ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.927300 containerd[1760]: 2026-03-02 13:12:40.900 [INFO][5412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae4430da851 ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.927300 containerd[1760]: 2026-03-02 13:12:40.905 [INFO][5412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.927300 containerd[1760]: 2026-03-02 13:12:40.906 [INFO][5412] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"662b1d4a-a279-406c-96fb-fce38eb91097", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252", Pod:"csi-node-driver-nbvnj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae4430da851", MAC:"96:4a:21:85:4a:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:40.927300 containerd[1760]: 2026-03-02 13:12:40.923 [INFO][5412] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252" Namespace="calico-system" Pod="csi-node-driver-nbvnj" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:40.979271 containerd[1760]: time="2026-03-02T13:12:40.978924501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:40.979271 containerd[1760]: time="2026-03-02T13:12:40.979065741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:40.979271 containerd[1760]: time="2026-03-02T13:12:40.979108981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:40.979557 containerd[1760]: time="2026-03-02T13:12:40.979240981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:41.001302 systemd[1]: Started cri-containerd-eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252.scope - libcontainer container eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252. Mar 2 13:12:41.042341 containerd[1760]: time="2026-03-02T13:12:41.042233791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nbvnj,Uid:662b1d4a-a279-406c-96fb-fce38eb91097,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252\"" Mar 2 13:12:41.077491 systemd[1]: run-netns-cni\x2d1bec46ad\x2dfffc\x2dd033\x2de672\x2d922e49fcc1f7.mount: Deactivated successfully. Mar 2 13:12:42.108807 systemd-networkd[1374]: caliae4430da851: Gained IPv6LL Mar 2 13:12:42.424682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239039024.mount: Deactivated successfully. 
Mar 2 13:12:42.425440 containerd[1760]: time="2026-03-02T13:12:42.424865011Z" level=info msg="StopPodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\"" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.489 [INFO][5517] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.489 [INFO][5517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" iface="eth0" netns="/var/run/netns/cni-748f979a-9f39-8a5d-9cee-3d4a869b95f4" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.489 [INFO][5517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" iface="eth0" netns="/var/run/netns/cni-748f979a-9f39-8a5d-9cee-3d4a869b95f4" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.490 [INFO][5517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" iface="eth0" netns="/var/run/netns/cni-748f979a-9f39-8a5d-9cee-3d4a869b95f4" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.490 [INFO][5517] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.490 [INFO][5517] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.510 [INFO][5524] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.515 [INFO][5524] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.516 [INFO][5524] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.524 [WARNING][5524] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.524 [INFO][5524] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.525 [INFO][5524] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:42.528988 containerd[1760]: 2026-03-02 13:12:42.527 [INFO][5517] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:42.531283 containerd[1760]: time="2026-03-02T13:12:42.531246536Z" level=info msg="TearDown network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" successfully" Mar 2 13:12:42.531283 containerd[1760]: time="2026-03-02T13:12:42.531279936Z" level=info msg="StopPodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" returns successfully" Mar 2 13:12:42.532728 systemd[1]: run-netns-cni\x2d748f979a\x2d9f39\x2d8a5d\x2d9cee\x2d3d4a869b95f4.mount: Deactivated successfully. 
Mar 2 13:12:42.538971 containerd[1760]: time="2026-03-02T13:12:42.538620142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v96dg,Uid:5c8fecf4-235a-438f-90ac-097988d29780,Namespace:kube-system,Attempt:1,}" Mar 2 13:12:42.691690 systemd-networkd[1374]: cali280f9a1fbe1: Link UP Mar 2 13:12:42.693240 systemd-networkd[1374]: cali280f9a1fbe1: Gained carrier Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.618 [INFO][5530] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0 coredns-7d764666f9- kube-system 5c8fecf4-235a-438f-90ac-097988d29780 1025 0 2026-03-02 13:11:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-d5e61b93e9 coredns-7d764666f9-v96dg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali280f9a1fbe1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.618 [INFO][5530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.643 [INFO][5542] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" 
HandleID="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.654 [INFO][5542] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" HandleID="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-d5e61b93e9", "pod":"coredns-7d764666f9-v96dg", "timestamp":"2026-03-02 13:12:42.643386585 +0000 UTC"}, Hostname:"ci-4081.3.101-d5e61b93e9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400026b080)} Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.654 [INFO][5542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.654 [INFO][5542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.654 [INFO][5542] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d5e61b93e9' Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.656 [INFO][5542] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.660 [INFO][5542] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.665 [INFO][5542] ipam/ipam.go 526: Trying affinity for 192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.666 [INFO][5542] ipam/ipam.go 160: Attempting to load block cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.668 [INFO][5542] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.60.0/26 host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.669 [INFO][5542] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.60.0/26 handle="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.670 [INFO][5542] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092 Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.677 [INFO][5542] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.60.0/26 handle="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.686 [INFO][5542] ipam/ipam.go 1288: Successfully claimed IPs: 
[192.168.60.8/26] block=192.168.60.0/26 handle="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.687 [INFO][5542] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.60.8/26] handle="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" host="ci-4081.3.101-d5e61b93e9" Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.687 [INFO][5542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:42.711420 containerd[1760]: 2026-03-02 13:12:42.687 [INFO][5542] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.60.8/26] IPv6=[] ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" HandleID="k8s-pod-network.ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.711997 containerd[1760]: 2026-03-02 13:12:42.689 [INFO][5530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c8fecf4-235a-438f-90ac-097988d29780", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"", Pod:"coredns-7d764666f9-v96dg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali280f9a1fbe1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:42.711997 containerd[1760]: 2026-03-02 13:12:42.689 [INFO][5530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.60.8/32] ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.711997 containerd[1760]: 2026-03-02 13:12:42.689 [INFO][5530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali280f9a1fbe1 
ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.711997 containerd[1760]: 2026-03-02 13:12:42.693 [INFO][5530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.711997 containerd[1760]: 2026-03-02 13:12:42.693 [INFO][5530] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c8fecf4-235a-438f-90ac-097988d29780", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092", 
Pod:"coredns-7d764666f9-v96dg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali280f9a1fbe1", MAC:"1a:78:24:30:b3:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:42.712288 containerd[1760]: 2026-03-02 13:12:42.707 [INFO][5530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092" Namespace="kube-system" Pod="coredns-7d764666f9-v96dg" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:42.738183 containerd[1760]: time="2026-03-02T13:12:42.737920260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:12:42.738183 containerd[1760]: time="2026-03-02T13:12:42.738016101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:12:42.738183 containerd[1760]: time="2026-03-02T13:12:42.738043661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:42.739415 containerd[1760]: time="2026-03-02T13:12:42.738142061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:12:42.757452 systemd[1]: Started cri-containerd-ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092.scope - libcontainer container ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092. Mar 2 13:12:42.804316 containerd[1760]: time="2026-03-02T13:12:42.804279673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v96dg,Uid:5c8fecf4-235a-438f-90ac-097988d29780,Namespace:kube-system,Attempt:1,} returns sandbox id \"ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092\"" Mar 2 13:12:42.818232 containerd[1760]: time="2026-03-02T13:12:42.818037884Z" level=info msg="CreateContainer within sandbox \"ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:12:42.866417 containerd[1760]: time="2026-03-02T13:12:42.866371483Z" level=info msg="CreateContainer within sandbox \"ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8aabb665bed940b6b6c5a29aaabb4ade8901fc38feaae00f68b77839226d2785\"" Mar 2 13:12:42.867029 containerd[1760]: time="2026-03-02T13:12:42.867008043Z" level=info msg="StartContainer for \"8aabb665bed940b6b6c5a29aaabb4ade8901fc38feaae00f68b77839226d2785\"" Mar 2 13:12:42.894316 systemd[1]: Started cri-containerd-8aabb665bed940b6b6c5a29aaabb4ade8901fc38feaae00f68b77839226d2785.scope - libcontainer container 
8aabb665bed940b6b6c5a29aaabb4ade8901fc38feaae00f68b77839226d2785. Mar 2 13:12:42.919755 containerd[1760]: time="2026-03-02T13:12:42.919719125Z" level=info msg="StartContainer for \"8aabb665bed940b6b6c5a29aaabb4ade8901fc38feaae00f68b77839226d2785\" returns successfully" Mar 2 13:12:43.675435 kubelet[3222]: I0302 13:12:43.674880 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-v96dg" podStartSLOduration=50.674865446 podStartE2EDuration="50.674865446s" podCreationTimestamp="2026-03-02 13:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:12:43.673679645 +0000 UTC m=+57.346917432" watchObservedRunningTime="2026-03-02 13:12:43.674865446 +0000 UTC m=+57.348103233" Mar 2 13:12:43.872262 containerd[1760]: time="2026-03-02T13:12:43.872215403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:43.875750 containerd[1760]: time="2026-03-02T13:12:43.875720846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=51600693" Mar 2 13:12:43.880806 containerd[1760]: time="2026-03-02T13:12:43.880503769Z" level=info msg="ImageCreate event name:\"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:43.886212 containerd[1760]: time="2026-03-02T13:12:43.886184294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:43.887063 containerd[1760]: time="2026-03-02T13:12:43.886991495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id 
\"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"51600539\" in 3.818599078s" Mar 2 13:12:43.887151 containerd[1760]: time="2026-03-02T13:12:43.887135295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\"" Mar 2 13:12:43.889907 containerd[1760]: time="2026-03-02T13:12:43.889323776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 13:12:43.895902 containerd[1760]: time="2026-03-02T13:12:43.895875182Z" level=info msg="CreateContainer within sandbox \"e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 13:12:43.952918 containerd[1760]: time="2026-03-02T13:12:43.952792467Z" level=info msg="CreateContainer within sandbox \"e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"54cbc4181b0f749876e13ab0ad37a7c7918997379bd293d01430956cd51dd6d3\"" Mar 2 13:12:43.954841 containerd[1760]: time="2026-03-02T13:12:43.953358867Z" level=info msg="StartContainer for \"54cbc4181b0f749876e13ab0ad37a7c7918997379bd293d01430956cd51dd6d3\"" Mar 2 13:12:43.994335 systemd[1]: Started cri-containerd-54cbc4181b0f749876e13ab0ad37a7c7918997379bd293d01430956cd51dd6d3.scope - libcontainer container 54cbc4181b0f749876e13ab0ad37a7c7918997379bd293d01430956cd51dd6d3. 
Mar 2 13:12:44.030989 containerd[1760]: time="2026-03-02T13:12:44.030840609Z" level=info msg="StartContainer for \"54cbc4181b0f749876e13ab0ad37a7c7918997379bd293d01430956cd51dd6d3\" returns successfully" Mar 2 13:12:44.121159 systemd[1]: run-containerd-runc-k8s.io-54cbc4181b0f749876e13ab0ad37a7c7918997379bd293d01430956cd51dd6d3-runc.KJ6Sxf.mount: Deactivated successfully. Mar 2 13:12:44.675568 kubelet[3222]: I0302 13:12:44.675496 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7d7658d587-mwp5j" podStartSLOduration=26.797158494 podStartE2EDuration="38.675392911s" podCreationTimestamp="2026-03-02 13:12:06 +0000 UTC" firstStartedPulling="2026-03-02 13:12:32.009922519 +0000 UTC m=+45.683160306" lastFinishedPulling="2026-03-02 13:12:43.888156936 +0000 UTC m=+57.561394723" observedRunningTime="2026-03-02 13:12:44.67412835 +0000 UTC m=+58.347366097" watchObservedRunningTime="2026-03-02 13:12:44.675392911 +0000 UTC m=+58.348630698" Mar 2 13:12:44.732303 systemd-networkd[1374]: cali280f9a1fbe1: Gained IPv6LL Mar 2 13:12:45.419241 containerd[1760]: time="2026-03-02T13:12:45.419194643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:45.421743 containerd[1760]: time="2026-03-02T13:12:45.421665764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=5881068" Mar 2 13:12:45.424615 containerd[1760]: time="2026-03-02T13:12:45.424585727Z" level=info msg="ImageCreate event name:\"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:45.429547 containerd[1760]: time="2026-03-02T13:12:45.429481090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 2 13:12:45.430384 containerd[1760]: time="2026-03-02T13:12:45.430276171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7278585\" in 1.540923394s" Mar 2 13:12:45.430384 containerd[1760]: time="2026-03-02T13:12:45.430306531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\"" Mar 2 13:12:45.433134 containerd[1760]: time="2026-03-02T13:12:45.432147812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 13:12:45.440255 containerd[1760]: time="2026-03-02T13:12:45.440225498Z" level=info msg="CreateContainer within sandbox \"c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 13:12:45.477436 containerd[1760]: time="2026-03-02T13:12:45.477328524Z" level=info msg="CreateContainer within sandbox \"c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"97eb0761175cb3d4ba768b2915e112da52520e84bf13188f122efbe824d232c8\"" Mar 2 13:12:45.479028 containerd[1760]: time="2026-03-02T13:12:45.478532445Z" level=info msg="StartContainer for \"97eb0761175cb3d4ba768b2915e112da52520e84bf13188f122efbe824d232c8\"" Mar 2 13:12:45.514321 systemd[1]: Started cri-containerd-97eb0761175cb3d4ba768b2915e112da52520e84bf13188f122efbe824d232c8.scope - libcontainer container 97eb0761175cb3d4ba768b2915e112da52520e84bf13188f122efbe824d232c8. 
Mar 2 13:12:45.548650 containerd[1760]: time="2026-03-02T13:12:45.548436335Z" level=info msg="StartContainer for \"97eb0761175cb3d4ba768b2915e112da52520e84bf13188f122efbe824d232c8\" returns successfully" Mar 2 13:12:46.422221 containerd[1760]: time="2026-03-02T13:12:46.421676760Z" level=info msg="StopPodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\"" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.463 [WARNING][5785] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c8fecf4-235a-438f-90ac-097988d29780", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092", Pod:"coredns-7d764666f9-v96dg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali280f9a1fbe1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.463 [INFO][5785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.463 [INFO][5785] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" iface="eth0" netns="" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.463 [INFO][5785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.463 [INFO][5785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.486 [INFO][5794] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.487 [INFO][5794] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.487 [INFO][5794] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.496 [WARNING][5794] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.496 [INFO][5794] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.497 [INFO][5794] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:46.501526 containerd[1760]: 2026-03-02 13:12:46.499 [INFO][5785] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.501927 containerd[1760]: time="2026-03-02T13:12:46.501546097Z" level=info msg="TearDown network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" successfully" Mar 2 13:12:46.501927 containerd[1760]: time="2026-03-02T13:12:46.501568497Z" level=info msg="StopPodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" returns successfully" Mar 2 13:12:46.502733 containerd[1760]: time="2026-03-02T13:12:46.502595817Z" level=info msg="RemovePodSandbox for \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\"" Mar 2 13:12:46.506928 containerd[1760]: time="2026-03-02T13:12:46.506818301Z" level=info msg="Forcibly stopping sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\"" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.540 [WARNING][5808] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5c8fecf4-235a-438f-90ac-097988d29780", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"ef9e280486f847f964c5a4dd44eb51ad1995e1751435fbf00772ad0d568ab092", Pod:"coredns-7d764666f9-v96dg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali280f9a1fbe1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.540 [INFO][5808] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.540 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" iface="eth0" netns="" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.540 [INFO][5808] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.540 [INFO][5808] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.558 [INFO][5815] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.558 [INFO][5815] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.558 [INFO][5815] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.566 [WARNING][5815] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.566 [INFO][5815] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" HandleID="k8s-pod-network.a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--v96dg-eth0" Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.567 [INFO][5815] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:46.571093 containerd[1760]: 2026-03-02 13:12:46.569 [INFO][5808] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253" Mar 2 13:12:46.571565 containerd[1760]: time="2026-03-02T13:12:46.571123867Z" level=info msg="TearDown network for sandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" successfully" Mar 2 13:12:46.578793 containerd[1760]: time="2026-03-02T13:12:46.578757952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:12:46.578870 containerd[1760]: time="2026-03-02T13:12:46.578830752Z" level=info msg="RemovePodSandbox \"a90551b2d36108b172ce38bcccbb067d7298a2438464d92fbd98fd820202b253\" returns successfully" Mar 2 13:12:46.579425 containerd[1760]: time="2026-03-02T13:12:46.579403912Z" level=info msg="StopPodSandbox for \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\"" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.616 [WARNING][5829] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd", Pod:"calico-apiserver-7f6ff4dd5d-nbv94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaa755832667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.616 [INFO][5829] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.616 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" iface="eth0" netns="" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.616 [INFO][5829] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.616 [INFO][5829] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.649 [INFO][5838] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.649 [INFO][5838] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.649 [INFO][5838] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.658 [WARNING][5838] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.658 [INFO][5838] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.661 [INFO][5838] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:46.666318 containerd[1760]: 2026-03-02 13:12:46.663 [INFO][5829] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.667402 containerd[1760]: time="2026-03-02T13:12:46.666345655Z" level=info msg="TearDown network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\" successfully" Mar 2 13:12:46.667402 containerd[1760]: time="2026-03-02T13:12:46.666373695Z" level=info msg="StopPodSandbox for \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\" returns successfully" Mar 2 13:12:46.667402 containerd[1760]: time="2026-03-02T13:12:46.666850775Z" level=info msg="RemovePodSandbox for \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\"" Mar 2 13:12:46.667402 containerd[1760]: time="2026-03-02T13:12:46.666876055Z" level=info msg="Forcibly stopping sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\"" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.716 [WARNING][5853] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"2b8c6e3f-27bc-4357-bc8f-90f4d1ae9548", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"a8d379cf1f92c151a350b5e570d6331617092c3c59f91ff6ea506fc0d73574cd", Pod:"calico-apiserver-7f6ff4dd5d-nbv94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaa755832667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.716 [INFO][5853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.716 [INFO][5853] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" iface="eth0" netns="" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.716 [INFO][5853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.716 [INFO][5853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.737 [INFO][5860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.737 [INFO][5860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.737 [INFO][5860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.746 [WARNING][5860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.746 [INFO][5860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" HandleID="k8s-pod-network.5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--nbv94-eth0" Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.747 [INFO][5860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:46.755179 containerd[1760]: 2026-03-02 13:12:46.753 [INFO][5853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d" Mar 2 13:12:46.755179 containerd[1760]: time="2026-03-02T13:12:46.755120078Z" level=info msg="TearDown network for sandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\" successfully" Mar 2 13:12:47.803253 containerd[1760]: time="2026-03-02T13:12:47.803097708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:12:47.803253 containerd[1760]: time="2026-03-02T13:12:47.803183148Z" level=info msg="RemovePodSandbox \"5615dbc361f698d09d655fa8e662048dfa96a915b2200c1243209a2d59bd472d\" returns successfully" Mar 2 13:12:47.804135 containerd[1760]: time="2026-03-02T13:12:47.804100588Z" level=info msg="StopPodSandbox for \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\"" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.857 [WARNING][5881] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"421d6308-a09a-4e47-a594-09b64ce05a98", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c", Pod:"coredns-7d764666f9-fl49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a8c80f77c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.857 [INFO][5881] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.857 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" iface="eth0" netns="" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.857 [INFO][5881] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.857 [INFO][5881] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.895 [INFO][5892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.895 [INFO][5892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.895 [INFO][5892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.908 [WARNING][5892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.908 [INFO][5892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.911 [INFO][5892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:47.917357 containerd[1760]: 2026-03-02 13:12:47.914 [INFO][5881] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:47.918258 containerd[1760]: time="2026-03-02T13:12:47.917929510Z" level=info msg="TearDown network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\" successfully" Mar 2 13:12:47.918258 containerd[1760]: time="2026-03-02T13:12:47.917959390Z" level=info msg="StopPodSandbox for \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\" returns successfully" Mar 2 13:12:47.918834 containerd[1760]: time="2026-03-02T13:12:47.918682830Z" level=info msg="RemovePodSandbox for \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\"" Mar 2 13:12:47.918834 containerd[1760]: time="2026-03-02T13:12:47.918720430Z" level=info msg="Forcibly stopping sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\"" Mar 2 13:12:47.996793 containerd[1760]: time="2026-03-02T13:12:47.996743646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:48.001022 
containerd[1760]: time="2026-03-02T13:12:48.000995529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8255947" Mar 2 13:12:48.005305 containerd[1760]: time="2026-03-02T13:12:48.005223892Z" level=info msg="ImageCreate event name:\"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.973 [WARNING][5906] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"421d6308-a09a-4e47-a594-09b64ce05a98", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 11, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"d972e24b76d0bbff0e18e1fb784938ae91f2fe04cceef0b17e72ec5441c2f97c", Pod:"coredns-7d764666f9-fl49c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali42a8c80f77c", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.977 [INFO][5906] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.977 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" iface="eth0" netns="" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.977 [INFO][5906] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.977 [INFO][5906] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.994 [INFO][5913] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.994 [INFO][5913] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:47.994 [INFO][5913] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:48.003 [WARNING][5913] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:48.003 [INFO][5913] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" HandleID="k8s-pod-network.f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-coredns--7d764666f9--fl49c-eth0" Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:48.006 [INFO][5913] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.011888 containerd[1760]: 2026-03-02 13:12:48.010 [INFO][5906] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc" Mar 2 13:12:48.012385 containerd[1760]: time="2026-03-02T13:12:48.011908897Z" level=info msg="TearDown network for sandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\" successfully" Mar 2 13:12:48.013753 containerd[1760]: time="2026-03-02T13:12:48.013650058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:48.014801 containerd[1760]: time="2026-03-02T13:12:48.014682899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"9653472\" in 2.582485767s" Mar 2 13:12:48.014801 containerd[1760]: time="2026-03-02T13:12:48.014719779Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\"" Mar 2 13:12:48.016687 containerd[1760]: time="2026-03-02T13:12:48.016424540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 13:12:48.024328 containerd[1760]: time="2026-03-02T13:12:48.024299746Z" level=info msg="CreateContainer within sandbox \"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 13:12:48.030018 containerd[1760]: time="2026-03-02T13:12:48.029992550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:12:48.030271 containerd[1760]: time="2026-03-02T13:12:48.030247550Z" level=info msg="RemovePodSandbox \"f7615fbd463f3fb61b794eaf8abb140e855b3431ea3968414a6c057d174646dc\" returns successfully" Mar 2 13:12:48.030787 containerd[1760]: time="2026-03-02T13:12:48.030765710Z" level=info msg="StopPodSandbox for \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\"" Mar 2 13:12:48.064513 containerd[1760]: time="2026-03-02T13:12:48.064408854Z" level=info msg="CreateContainer within sandbox \"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"36fc163f82036fd1fbc32fae1cd931f1cdee62019a30f3499ff414d7f2867785\"" Mar 2 13:12:48.065560 containerd[1760]: time="2026-03-02T13:12:48.065539375Z" level=info msg="StartContainer for \"36fc163f82036fd1fbc32fae1cd931f1cdee62019a30f3499ff414d7f2867785\"" Mar 2 13:12:48.110567 systemd[1]: Started cri-containerd-36fc163f82036fd1fbc32fae1cd931f1cdee62019a30f3499ff414d7f2867785.scope - libcontainer container 
36fc163f82036fd1fbc32fae1cd931f1cdee62019a30f3499ff414d7f2867785. Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.071 [WARNING][5927] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0", GenerateName:"calico-kube-controllers-775879d688-", Namespace:"calico-system", SelfLink:"", UID:"a8eaf721-f0bb-465f-a36a-f220c434ea52", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"775879d688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093", Pod:"calico-kube-controllers-775879d688-ks9z6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e919f7999c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 
13:12:48.072 [INFO][5927] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.072 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" iface="eth0" netns="" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.072 [INFO][5927] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.072 [INFO][5927] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.109 [INFO][5935] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.109 [INFO][5935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.109 [INFO][5935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.120 [WARNING][5935] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.120 [INFO][5935] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.122 [INFO][5935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.125664 containerd[1760]: 2026-03-02 13:12:48.124 [INFO][5927] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.126821 containerd[1760]: time="2026-03-02T13:12:48.126137739Z" level=info msg="TearDown network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\" successfully" Mar 2 13:12:48.126821 containerd[1760]: time="2026-03-02T13:12:48.126223139Z" level=info msg="StopPodSandbox for \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\" returns successfully" Mar 2 13:12:48.126821 containerd[1760]: time="2026-03-02T13:12:48.126618979Z" level=info msg="RemovePodSandbox for \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\"" Mar 2 13:12:48.126821 containerd[1760]: time="2026-03-02T13:12:48.126645579Z" level=info msg="Forcibly stopping sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\"" Mar 2 13:12:48.147614 containerd[1760]: time="2026-03-02T13:12:48.147465554Z" level=info msg="StartContainer for \"36fc163f82036fd1fbc32fae1cd931f1cdee62019a30f3499ff414d7f2867785\" returns 
successfully" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.168 [WARNING][5973] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0", GenerateName:"calico-kube-controllers-775879d688-", Namespace:"calico-system", SelfLink:"", UID:"a8eaf721-f0bb-465f-a36a-f220c434ea52", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"775879d688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"2a71dad6e995034570caec4b2b923793b20a8bcc0d1f4e0c8252cb4c6c60c093", Pod:"calico-kube-controllers-775879d688-ks9z6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e919f7999c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.168 [INFO][5973] cni-plugin/k8s.go 652: 
Cleaning up netns ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.168 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" iface="eth0" netns="" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.168 [INFO][5973] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.168 [INFO][5973] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.190 [INFO][5988] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.190 [INFO][5988] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.191 [INFO][5988] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.200 [WARNING][5988] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.200 [INFO][5988] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" HandleID="k8s-pod-network.4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--kube--controllers--775879d688--ks9z6-eth0" Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.201 [INFO][5988] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.205648 containerd[1760]: 2026-03-02 13:12:48.203 [INFO][5973] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735" Mar 2 13:12:48.206127 containerd[1760]: time="2026-03-02T13:12:48.205689156Z" level=info msg="TearDown network for sandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\" successfully" Mar 2 13:12:48.215782 containerd[1760]: time="2026-03-02T13:12:48.215737563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:12:48.215872 containerd[1760]: time="2026-03-02T13:12:48.215822123Z" level=info msg="RemovePodSandbox \"4cf59bf4b402a38fdf04ccea564a1607be3e0b2cc9ce2e0c72228fc05fc12735\" returns successfully" Mar 2 13:12:48.216652 containerd[1760]: time="2026-03-02T13:12:48.216455723Z" level=info msg="StopPodSandbox for \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\"" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.246 [WARNING][6002] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"fe44d783-a407-4475-bb45-4b5e475fc600", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c", Pod:"goldmane-7d7658d587-mwp5j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie8e95b0cba4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.246 [INFO][6002] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.246 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" iface="eth0" netns="" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.246 [INFO][6002] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.246 [INFO][6002] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.263 [INFO][6009] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.263 [INFO][6009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.263 [INFO][6009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.274 [WARNING][6009] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.274 [INFO][6009] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.276 [INFO][6009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.279636 containerd[1760]: 2026-03-02 13:12:48.278 [INFO][6002] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.280204 containerd[1760]: time="2026-03-02T13:12:48.279727488Z" level=info msg="TearDown network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\" successfully" Mar 2 13:12:48.280204 containerd[1760]: time="2026-03-02T13:12:48.279751289Z" level=info msg="StopPodSandbox for \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\" returns successfully" Mar 2 13:12:48.280204 containerd[1760]: time="2026-03-02T13:12:48.280087049Z" level=info msg="RemovePodSandbox for \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\"" Mar 2 13:12:48.280204 containerd[1760]: time="2026-03-02T13:12:48.280112489Z" level=info msg="Forcibly stopping sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\"" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.317 [WARNING][6023] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0", GenerateName:"goldmane-7d7658d587-", Namespace:"calico-system", SelfLink:"", UID:"fe44d783-a407-4475-bb45-4b5e475fc600", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7d7658d587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"e8a3200a7d9bffcd7eff8e4513ae0858119d36ba9af9e4f73c63ebdfdf25487c", Pod:"goldmane-7d7658d587-mwp5j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.60.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie8e95b0cba4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.317 [INFO][6023] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.317 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" iface="eth0" netns="" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.317 [INFO][6023] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.317 [INFO][6023] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.336 [INFO][6031] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.337 [INFO][6031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.337 [INFO][6031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.345 [WARNING][6031] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.346 [INFO][6031] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" HandleID="k8s-pod-network.c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Workload="ci--4081.3.101--d5e61b93e9-k8s-goldmane--7d7658d587--mwp5j-eth0" Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.348 [INFO][6031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.351518 containerd[1760]: 2026-03-02 13:12:48.349 [INFO][6023] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892" Mar 2 13:12:48.351886 containerd[1760]: time="2026-03-02T13:12:48.351508580Z" level=info msg="TearDown network for sandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\" successfully" Mar 2 13:12:48.359789 containerd[1760]: time="2026-03-02T13:12:48.359741386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:12:48.359861 containerd[1760]: time="2026-03-02T13:12:48.359851146Z" level=info msg="RemovePodSandbox \"c46caa868f74ddc086c3c3dc49dace783a32b5f2e19e3c1e4d689b6d03427892\" returns successfully" Mar 2 13:12:48.360310 containerd[1760]: time="2026-03-02T13:12:48.360284866Z" level=info msg="StopPodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\"" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.395 [WARNING][6045] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"662b1d4a-a279-406c-96fb-fce38eb91097", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252", Pod:"csi-node-driver-nbvnj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae4430da851", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.395 [INFO][6045] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.395 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" iface="eth0" netns="" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.395 [INFO][6045] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.395 [INFO][6045] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.412 [INFO][6052] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.413 [INFO][6052] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.413 [INFO][6052] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.422 [WARNING][6052] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.422 [INFO][6052] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.424 [INFO][6052] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.429192 containerd[1760]: 2026-03-02 13:12:48.427 [INFO][6045] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.430061 containerd[1760]: time="2026-03-02T13:12:48.429219955Z" level=info msg="TearDown network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" successfully" Mar 2 13:12:48.430061 containerd[1760]: time="2026-03-02T13:12:48.429241995Z" level=info msg="StopPodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" returns successfully" Mar 2 13:12:48.430061 containerd[1760]: time="2026-03-02T13:12:48.429582196Z" level=info msg="RemovePodSandbox for \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\"" Mar 2 13:12:48.430061 containerd[1760]: time="2026-03-02T13:12:48.429611396Z" level=info msg="Forcibly stopping sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\"" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.467 [WARNING][6067] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"662b1d4a-a279-406c-96fb-fce38eb91097", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5d8f55657d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252", Pod:"csi-node-driver-nbvnj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.60.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae4430da851", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.468 [INFO][6067] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.468 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" iface="eth0" netns="" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.468 [INFO][6067] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.468 [INFO][6067] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.490 [INFO][6074] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.490 [INFO][6074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.490 [INFO][6074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.498 [WARNING][6074] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.498 [INFO][6074] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" HandleID="k8s-pod-network.3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Workload="ci--4081.3.101--d5e61b93e9-k8s-csi--node--driver--nbvnj-eth0" Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.499 [INFO][6074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.503134 containerd[1760]: 2026-03-02 13:12:48.501 [INFO][6067] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730" Mar 2 13:12:48.503534 containerd[1760]: time="2026-03-02T13:12:48.503176968Z" level=info msg="TearDown network for sandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" successfully" Mar 2 13:12:48.510368 containerd[1760]: time="2026-03-02T13:12:48.510335613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:12:48.510438 containerd[1760]: time="2026-03-02T13:12:48.510400813Z" level=info msg="RemovePodSandbox \"3d64aa86846870def471a43cd6ca179b3abd9c874b65bbc391c24a8bcac4d730\" returns successfully" Mar 2 13:12:48.510911 containerd[1760]: time="2026-03-02T13:12:48.510887694Z" level=info msg="StopPodSandbox for \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\"" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.544 [WARNING][6089] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.544 [INFO][6089] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.544 [INFO][6089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" iface="eth0" netns="" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.544 [INFO][6089] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.544 [INFO][6089] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.564 [INFO][6096] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.565 [INFO][6096] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.565 [INFO][6096] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.573 [WARNING][6096] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.573 [INFO][6096] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.574 [INFO][6096] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.577923 containerd[1760]: 2026-03-02 13:12:48.576 [INFO][6089] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.577923 containerd[1760]: time="2026-03-02T13:12:48.577800342Z" level=info msg="TearDown network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\" successfully" Mar 2 13:12:48.577923 containerd[1760]: time="2026-03-02T13:12:48.577825742Z" level=info msg="StopPodSandbox for \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\" returns successfully" Mar 2 13:12:48.578995 containerd[1760]: time="2026-03-02T13:12:48.578554382Z" level=info msg="RemovePodSandbox for \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\"" Mar 2 13:12:48.578995 containerd[1760]: time="2026-03-02T13:12:48.578582222Z" level=info msg="Forcibly stopping sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\"" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.608 [WARNING][6110] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" WorkloadEndpoint="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.608 [INFO][6110] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.608 [INFO][6110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" iface="eth0" netns="" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.608 [INFO][6110] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.608 [INFO][6110] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.625 [INFO][6117] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.625 [INFO][6117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.626 [INFO][6117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.633 [WARNING][6117] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.634 [INFO][6117] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" HandleID="k8s-pod-network.6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Workload="ci--4081.3.101--d5e61b93e9-k8s-whisker--554bdbbf9--xb6cd-eth0" Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.635 [INFO][6117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.638377 containerd[1760]: 2026-03-02 13:12:48.636 [INFO][6110] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc" Mar 2 13:12:48.640014 containerd[1760]: time="2026-03-02T13:12:48.638670025Z" level=info msg="TearDown network for sandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\" successfully" Mar 2 13:12:48.647876 containerd[1760]: time="2026-03-02T13:12:48.647839352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:12:48.648023 containerd[1760]: time="2026-03-02T13:12:48.647903992Z" level=info msg="RemovePodSandbox \"6d9d46680851293ea4904d46c9493f350048a81d1ccf2ae93be0fd93298cb6dc\" returns successfully" Mar 2 13:12:48.648586 containerd[1760]: time="2026-03-02T13:12:48.648337632Z" level=info msg="StopPodSandbox for \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\"" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.678 [WARNING][6131] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"da16cb8c-bc4e-45ca-bc1a-d50354581491", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b", Pod:"calico-apiserver-7f6ff4dd5d-77p2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali093d9108fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.678 [INFO][6131] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.678 [INFO][6131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" iface="eth0" netns="" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.678 [INFO][6131] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.678 [INFO][6131] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.700 [INFO][6138] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.700 [INFO][6138] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.700 [INFO][6138] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.709 [WARNING][6138] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.709 [INFO][6138] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.710 [INFO][6138] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.713654 containerd[1760]: 2026-03-02 13:12:48.712 [INFO][6131] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.714634 containerd[1760]: time="2026-03-02T13:12:48.714112319Z" level=info msg="TearDown network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\" successfully" Mar 2 13:12:48.714634 containerd[1760]: time="2026-03-02T13:12:48.714180439Z" level=info msg="StopPodSandbox for \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\" returns successfully" Mar 2 13:12:48.714634 containerd[1760]: time="2026-03-02T13:12:48.714519719Z" level=info msg="RemovePodSandbox for \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\"" Mar 2 13:12:48.714634 containerd[1760]: time="2026-03-02T13:12:48.714544039Z" level=info msg="Forcibly stopping sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\"" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.751 [WARNING][6152] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0", GenerateName:"calico-apiserver-7f6ff4dd5d-", Namespace:"calico-system", SelfLink:"", UID:"da16cb8c-bc4e-45ca-bc1a-d50354581491", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f6ff4dd5d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d5e61b93e9", ContainerID:"425135a5c7e61cef6cb4603748921badcb9e8023fbec7b724a0647f75a1c332b", Pod:"calico-apiserver-7f6ff4dd5d-77p2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali093d9108fea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.751 [INFO][6152] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.751 [INFO][6152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" iface="eth0" netns="" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.751 [INFO][6152] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.751 [INFO][6152] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.770 [INFO][6159] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.770 [INFO][6159] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.770 [INFO][6159] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.779 [WARNING][6159] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.779 [INFO][6159] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" HandleID="k8s-pod-network.c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Workload="ci--4081.3.101--d5e61b93e9-k8s-calico--apiserver--7f6ff4dd5d--77p2h-eth0" Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.780 [INFO][6159] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:12:48.783883 containerd[1760]: 2026-03-02 13:12:48.782 [INFO][6152] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb" Mar 2 13:12:48.784716 containerd[1760]: time="2026-03-02T13:12:48.784358289Z" level=info msg="TearDown network for sandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\" successfully" Mar 2 13:12:48.791287 containerd[1760]: time="2026-03-02T13:12:48.791259574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:12:48.791485 containerd[1760]: time="2026-03-02T13:12:48.791393454Z" level=info msg="RemovePodSandbox \"c96691eef0791fceca1580462529b8704cfde1c3eb8850ceb95a5d8397206adb\" returns successfully" Mar 2 13:12:49.994235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163897602.mount: Deactivated successfully. 
Mar 2 13:12:50.041203 containerd[1760]: time="2026-03-02T13:12:50.040754308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:50.044240 containerd[1760]: time="2026-03-02T13:12:50.044196230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=16420592" Mar 2 13:12:50.047433 containerd[1760]: time="2026-03-02T13:12:50.047096433Z" level=info msg="ImageCreate event name:\"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:50.051124 containerd[1760]: time="2026-03-02T13:12:50.051096675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:50.051854 containerd[1760]: time="2026-03-02T13:12:50.051828636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"16420422\" in 2.035370376s" Mar 2 13:12:50.051953 containerd[1760]: time="2026-03-02T13:12:50.051936876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\"" Mar 2 13:12:50.053580 containerd[1760]: time="2026-03-02T13:12:50.053558037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 13:12:50.060156 containerd[1760]: time="2026-03-02T13:12:50.060002802Z" level=info msg="CreateContainer within sandbox 
\"c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 13:12:50.094644 containerd[1760]: time="2026-03-02T13:12:50.094602626Z" level=info msg="CreateContainer within sandbox \"c31da0d8cc346c8d09e8492cad8f538ccbdfb2b80508282f452edd188d9ca0fb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ea2015ab63df9e5e82b3ef34ec34c2f452e59c1a91b4ab626965311f68efcc15\"" Mar 2 13:12:50.095239 containerd[1760]: time="2026-03-02T13:12:50.095130227Z" level=info msg="StartContainer for \"ea2015ab63df9e5e82b3ef34ec34c2f452e59c1a91b4ab626965311f68efcc15\"" Mar 2 13:12:50.126308 systemd[1]: Started cri-containerd-ea2015ab63df9e5e82b3ef34ec34c2f452e59c1a91b4ab626965311f68efcc15.scope - libcontainer container ea2015ab63df9e5e82b3ef34ec34c2f452e59c1a91b4ab626965311f68efcc15. Mar 2 13:12:50.159061 containerd[1760]: time="2026-03-02T13:12:50.159009873Z" level=info msg="StartContainer for \"ea2015ab63df9e5e82b3ef34ec34c2f452e59c1a91b4ab626965311f68efcc15\" returns successfully" Mar 2 13:12:50.715782 kubelet[3222]: I0302 13:12:50.715631 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-5cddd77687-ljvpr" podStartSLOduration=2.176245105 podStartE2EDuration="19.715616111s" podCreationTimestamp="2026-03-02 13:12:31 +0000 UTC" firstStartedPulling="2026-03-02 13:12:32.51326987 +0000 UTC m=+46.186507657" lastFinishedPulling="2026-03-02 13:12:50.052640836 +0000 UTC m=+63.725878663" observedRunningTime="2026-03-02 13:12:50.71487771 +0000 UTC m=+64.388115497" watchObservedRunningTime="2026-03-02 13:12:50.715616111 +0000 UTC m=+64.388853858" Mar 2 13:12:52.263015 containerd[1760]: time="2026-03-02T13:12:52.255887508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:52.264119 containerd[1760]: 
time="2026-03-02T13:12:52.263727674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=13755078" Mar 2 13:12:52.264119 containerd[1760]: time="2026-03-02T13:12:52.263795194Z" level=info msg="ImageCreate event name:\"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:52.267918 containerd[1760]: time="2026-03-02T13:12:52.267884078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:12:52.268795 containerd[1760]: time="2026-03-02T13:12:52.268764998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"15152555\" in 2.2145044s" Mar 2 13:12:52.268859 containerd[1760]: time="2026-03-02T13:12:52.268795758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\"" Mar 2 13:12:52.279076 containerd[1760]: time="2026-03-02T13:12:52.279049486Z" level=info msg="CreateContainer within sandbox \"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 13:12:52.305325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349576511.mount: Deactivated successfully. 
Mar 2 13:12:52.319856 containerd[1760]: time="2026-03-02T13:12:52.319760759Z" level=info msg="CreateContainer within sandbox \"eb1d165b2be5d0c57d28fd4b1346ed8ef2ef3af496e435f9eea681ccd5ca3252\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"684d3d0656827e8a67d1b3977a8e9e14dd07d1cf722a3210b7b2251ff97da285\"" Mar 2 13:12:52.320514 containerd[1760]: time="2026-03-02T13:12:52.320485959Z" level=info msg="StartContainer for \"684d3d0656827e8a67d1b3977a8e9e14dd07d1cf722a3210b7b2251ff97da285\"" Mar 2 13:12:52.354393 systemd[1]: Started cri-containerd-684d3d0656827e8a67d1b3977a8e9e14dd07d1cf722a3210b7b2251ff97da285.scope - libcontainer container 684d3d0656827e8a67d1b3977a8e9e14dd07d1cf722a3210b7b2251ff97da285. Mar 2 13:12:52.382250 containerd[1760]: time="2026-03-02T13:12:52.382211808Z" level=info msg="StartContainer for \"684d3d0656827e8a67d1b3977a8e9e14dd07d1cf722a3210b7b2251ff97da285\" returns successfully" Mar 2 13:12:52.528848 kubelet[3222]: I0302 13:12:52.528663 3222 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 13:12:52.528848 kubelet[3222]: I0302 13:12:52.528701 3222 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 13:12:52.729143 kubelet[3222]: I0302 13:12:52.728423 3222 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-nbvnj" podStartSLOduration=34.503838117 podStartE2EDuration="45.728411962s" podCreationTimestamp="2026-03-02 13:12:07 +0000 UTC" firstStartedPulling="2026-03-02 13:12:41.045315634 +0000 UTC m=+54.718553421" lastFinishedPulling="2026-03-02 13:12:52.269889479 +0000 UTC m=+65.943127266" observedRunningTime="2026-03-02 13:12:52.727249681 +0000 UTC m=+66.400487468" watchObservedRunningTime="2026-03-02 13:12:52.728411962 +0000 UTC 
m=+66.401649749" Mar 2 13:14:01.617116 systemd[1]: run-containerd-runc-k8s.io-98274eb76963d1eb75475eb3315a751e5eed3412c5c391b97f432307f10c4bf7-runc.me3brX.mount: Deactivated successfully. Mar 2 13:14:05.673888 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:40908.service - OpenSSH per-connection server daemon (10.200.16.10:40908). Mar 2 13:14:06.175096 sshd[6546]: Accepted publickey for core from 10.200.16.10 port 40908 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:06.177201 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:06.182014 systemd-logind[1725]: New session 10 of user core. Mar 2 13:14:06.186311 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:14:06.609394 sshd[6546]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:06.613028 systemd-logind[1725]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:14:06.613737 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:40908.service: Deactivated successfully. Mar 2 13:14:06.616117 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:14:06.618615 systemd-logind[1725]: Removed session 10. Mar 2 13:14:10.661562 systemd[1]: run-containerd-runc-k8s.io-b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63-runc.0UEtLI.mount: Deactivated successfully. Mar 2 13:14:11.704688 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:54504.service - OpenSSH per-connection server daemon (10.200.16.10:54504). Mar 2 13:14:12.187961 sshd[6612]: Accepted publickey for core from 10.200.16.10 port 54504 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:12.189749 sshd[6612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:12.193518 systemd-logind[1725]: New session 11 of user core. Mar 2 13:14:12.198456 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 2 13:14:12.601421 sshd[6612]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:12.604728 systemd-logind[1725]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:14:12.606693 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:54504.service: Deactivated successfully. Mar 2 13:14:12.608736 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:14:12.610200 systemd-logind[1725]: Removed session 11. Mar 2 13:14:17.696711 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:54506.service - OpenSSH per-connection server daemon (10.200.16.10:54506). Mar 2 13:14:18.186735 sshd[6647]: Accepted publickey for core from 10.200.16.10 port 54506 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:18.187677 sshd[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:18.191610 systemd-logind[1725]: New session 12 of user core. Mar 2 13:14:18.197287 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:14:18.638142 sshd[6647]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:18.641860 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:54506.service: Deactivated successfully. Mar 2 13:14:18.643954 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:14:18.646032 systemd-logind[1725]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:14:18.647064 systemd-logind[1725]: Removed session 12. Mar 2 13:14:23.733383 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:48266.service - OpenSSH per-connection server daemon (10.200.16.10:48266). Mar 2 13:14:24.223371 sshd[6684]: Accepted publickey for core from 10.200.16.10 port 48266 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:24.224858 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:24.229219 systemd-logind[1725]: New session 13 of user core. 
Mar 2 13:14:24.240299 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:14:24.650496 sshd[6684]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:24.655318 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:48266.service: Deactivated successfully. Mar 2 13:14:24.658724 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:14:24.660150 systemd-logind[1725]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:14:24.661292 systemd-logind[1725]: Removed session 13. Mar 2 13:14:24.738661 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:48268.service - OpenSSH per-connection server daemon (10.200.16.10:48268). Mar 2 13:14:25.232227 sshd[6713]: Accepted publickey for core from 10.200.16.10 port 48268 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:25.233606 sshd[6713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:25.237706 systemd-logind[1725]: New session 14 of user core. Mar 2 13:14:25.243366 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:14:25.685910 sshd[6713]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:25.690260 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:48268.service: Deactivated successfully. Mar 2 13:14:25.693328 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:14:25.693995 systemd-logind[1725]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:14:25.694872 systemd-logind[1725]: Removed session 14. Mar 2 13:14:25.782543 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:48280.service - OpenSSH per-connection server daemon (10.200.16.10:48280). 
Mar 2 13:14:26.267096 sshd[6726]: Accepted publickey for core from 10.200.16.10 port 48280 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:26.268495 sshd[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:26.273147 systemd-logind[1725]: New session 15 of user core. Mar 2 13:14:26.276312 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:14:26.685481 sshd[6726]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:26.689413 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:48280.service: Deactivated successfully. Mar 2 13:14:26.691922 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:14:26.693220 systemd-logind[1725]: Session 15 logged out. Waiting for processes to exit. Mar 2 13:14:26.693978 systemd-logind[1725]: Removed session 15. Mar 2 13:14:31.774148 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:59302.service - OpenSSH per-connection server daemon (10.200.16.10:59302). Mar 2 13:14:32.274195 sshd[6760]: Accepted publickey for core from 10.200.16.10 port 59302 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:32.275105 sshd[6760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:32.279014 systemd-logind[1725]: New session 16 of user core. Mar 2 13:14:32.285321 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 13:14:32.694513 sshd[6760]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:32.698456 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:59302.service: Deactivated successfully. Mar 2 13:14:32.700440 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:14:32.701136 systemd-logind[1725]: Session 16 logged out. Waiting for processes to exit. Mar 2 13:14:32.702046 systemd-logind[1725]: Removed session 16. 
Mar 2 13:14:32.785381 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:59310.service - OpenSSH per-connection server daemon (10.200.16.10:59310). Mar 2 13:14:33.269827 sshd[6773]: Accepted publickey for core from 10.200.16.10 port 59310 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:33.270674 sshd[6773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:33.275126 systemd-logind[1725]: New session 17 of user core. Mar 2 13:14:33.280304 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 13:14:33.799200 sshd[6773]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:33.802656 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:59310.service: Deactivated successfully. Mar 2 13:14:33.804754 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 13:14:33.806072 systemd-logind[1725]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:14:33.807073 systemd-logind[1725]: Removed session 17. Mar 2 13:14:33.893748 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:59322.service - OpenSSH per-connection server daemon (10.200.16.10:59322). Mar 2 13:14:34.379349 sshd[6784]: Accepted publickey for core from 10.200.16.10 port 59322 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:34.382558 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:34.387551 systemd-logind[1725]: New session 18 of user core. Mar 2 13:14:34.395303 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 13:14:35.357363 sshd[6784]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:35.362177 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:59322.service: Deactivated successfully. Mar 2 13:14:35.366643 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:14:35.369901 systemd-logind[1725]: Session 18 logged out. Waiting for processes to exit. 
Mar 2 13:14:35.370781 systemd-logind[1725]: Removed session 18. Mar 2 13:14:35.449992 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:59336.service - OpenSSH per-connection server daemon (10.200.16.10:59336). Mar 2 13:14:35.933595 sshd[6815]: Accepted publickey for core from 10.200.16.10 port 59336 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:35.934469 sshd[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:35.938538 systemd-logind[1725]: New session 19 of user core. Mar 2 13:14:35.944304 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:14:36.459526 sshd[6815]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:36.462829 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:59336.service: Deactivated successfully. Mar 2 13:14:36.464934 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:14:36.465815 systemd-logind[1725]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:14:36.467026 systemd-logind[1725]: Removed session 19. Mar 2 13:14:36.547427 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:59344.service - OpenSSH per-connection server daemon (10.200.16.10:59344). Mar 2 13:14:37.047766 sshd[6828]: Accepted publickey for core from 10.200.16.10 port 59344 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:37.048879 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:37.053354 systemd-logind[1725]: New session 20 of user core. Mar 2 13:14:37.061303 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:14:37.489392 sshd[6828]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:37.492621 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:59344.service: Deactivated successfully. Mar 2 13:14:37.494858 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:14:37.496472 systemd-logind[1725]: Session 20 logged out. 
Waiting for processes to exit. Mar 2 13:14:37.497886 systemd-logind[1725]: Removed session 20. Mar 2 13:14:40.661387 systemd[1]: run-containerd-runc-k8s.io-b441d72943159e838b4dbedfa26239707086ef3b0397f2cb8db9d175592c3b63-runc.ICbHBO.mount: Deactivated successfully. Mar 2 13:14:42.578425 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:50284.service - OpenSSH per-connection server daemon (10.200.16.10:50284). Mar 2 13:14:43.066529 sshd[6862]: Accepted publickey for core from 10.200.16.10 port 50284 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:43.068152 sshd[6862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:43.072216 systemd-logind[1725]: New session 21 of user core. Mar 2 13:14:43.078299 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:14:43.472361 sshd[6862]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:43.477203 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:50284.service: Deactivated successfully. Mar 2 13:14:43.479726 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:14:43.481910 systemd-logind[1725]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:14:43.482973 systemd-logind[1725]: Removed session 21. Mar 2 13:14:48.559659 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:50300.service - OpenSSH per-connection server daemon (10.200.16.10:50300). Mar 2 13:14:49.049700 sshd[6896]: Accepted publickey for core from 10.200.16.10 port 50300 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:49.051067 sshd[6896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:49.054769 systemd-logind[1725]: New session 22 of user core. Mar 2 13:14:49.058296 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 2 13:14:49.462489 sshd[6896]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:49.466593 systemd-logind[1725]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:14:49.467302 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:50300.service: Deactivated successfully. Mar 2 13:14:49.471501 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:14:49.473108 systemd-logind[1725]: Removed session 22. Mar 2 13:14:54.552480 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:54666.service - OpenSSH per-connection server daemon (10.200.16.10:54666). Mar 2 13:14:55.036034 sshd[6931]: Accepted publickey for core from 10.200.16.10 port 54666 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:14:55.037509 sshd[6931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:14:55.041589 systemd-logind[1725]: New session 23 of user core. Mar 2 13:14:55.046305 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 13:14:55.444668 sshd[6931]: pam_unix(sshd:session): session closed for user core Mar 2 13:14:55.451221 systemd-logind[1725]: Session 23 logged out. Waiting for processes to exit. Mar 2 13:14:55.451528 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:54666.service: Deactivated successfully. Mar 2 13:14:55.453335 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 13:14:55.454261 systemd-logind[1725]: Removed session 23. Mar 2 13:15:00.532693 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:41738.service - OpenSSH per-connection server daemon (10.200.16.10:41738). Mar 2 13:15:01.027045 sshd[6945]: Accepted publickey for core from 10.200.16.10 port 41738 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:15:01.027884 sshd[6945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:01.031660 systemd-logind[1725]: New session 24 of user core. 
Mar 2 13:15:01.038296 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 13:15:01.435719 sshd[6945]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:01.440429 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:41738.service: Deactivated successfully. Mar 2 13:15:01.442256 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 13:15:01.442821 systemd-logind[1725]: Session 24 logged out. Waiting for processes to exit. Mar 2 13:15:01.443941 systemd-logind[1725]: Removed session 24. Mar 2 13:15:06.534416 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:41744.service - OpenSSH per-connection server daemon (10.200.16.10:41744). Mar 2 13:15:07.024317 sshd[6978]: Accepted publickey for core from 10.200.16.10 port 41744 ssh2: RSA SHA256:52dfq2xoobak5V8KUMpsxFzYzerT7MB9pwhdpXRVWM0 Mar 2 13:15:07.048007 sshd[6978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:07.056695 systemd-logind[1725]: New session 25 of user core. Mar 2 13:15:07.063325 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 2 13:15:07.433840 sshd[6978]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:07.437857 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:41744.service: Deactivated successfully. Mar 2 13:15:07.439532 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 13:15:07.441954 systemd-logind[1725]: Session 25 logged out. Waiting for processes to exit. Mar 2 13:15:07.443107 systemd-logind[1725]: Removed session 25.