Mar 13 12:21:00.204196 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 13 12:21:00.204217 kernel: Linux version 6.6.129-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 08:56:28 -00 2026
Mar 13 12:21:00.204225 kernel: KASLR enabled
Mar 13 12:21:00.204231 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 13 12:21:00.204238 kernel: printk: bootconsole [pl11] enabled
Mar 13 12:21:00.204244 kernel: efi: EFI v2.7 by EDK II
Mar 13 12:21:00.204251 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 13 12:21:00.204257 kernel: random: crng init done
Mar 13 12:21:00.204263 kernel: ACPI: Early table checksum verification disabled
Mar 13 12:21:00.204269 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 13 12:21:00.204275 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204281 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204288 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 13 12:21:00.204295 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204305 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204314 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204320 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204328 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204335 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204342 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 13 12:21:00.204349 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:21:00.204356 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 13 12:21:00.204363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 13 12:21:00.204369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 13 12:21:00.204376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 13 12:21:00.204383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 13 12:21:00.204389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 13 12:21:00.204396 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 13 12:21:00.204403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 13 12:21:00.204410 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 13 12:21:00.204416 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 13 12:21:00.204423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 13 12:21:00.204429 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 13 12:21:00.204436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 13 12:21:00.204442 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 13 12:21:00.204448 kernel: Zone ranges:
Mar 13 12:21:00.204455 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 13 12:21:00.204461 kernel: DMA32 empty
Mar 13 12:21:00.204467 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 13 12:21:00.204474 kernel: Movable zone start for each node
Mar 13 12:21:00.204484 kernel: Early memory node ranges
Mar 13 12:21:00.204493 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 13 12:21:00.204502 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 13 12:21:00.204509 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 13 12:21:00.204516 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 13 12:21:00.204524 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 13 12:21:00.204531 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 13 12:21:00.204537 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 13 12:21:00.204544 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 13 12:21:00.204551 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 13 12:21:00.204558 kernel: psci: probing for conduit method from ACPI.
Mar 13 12:21:00.204564 kernel: psci: PSCIv1.1 detected in firmware.
Mar 13 12:21:00.204571 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 13 12:21:00.204578 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 13 12:21:00.204584 kernel: psci: SMC Calling Convention v1.4
Mar 13 12:21:00.204591 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 13 12:21:00.204598 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 13 12:21:00.204606 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 13 12:21:00.204613 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 13 12:21:00.204620 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 13 12:21:00.204626 kernel: Detected PIPT I-cache on CPU0
Mar 13 12:21:00.204633 kernel: CPU features: detected: GIC system register CPU interface
Mar 13 12:21:00.204640 kernel: CPU features: detected: Hardware dirty bit management
Mar 13 12:21:00.204652 kernel: CPU features: detected: Spectre-BHB
Mar 13 12:21:00.204661 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 13 12:21:00.204668 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 13 12:21:00.204675 kernel: CPU features: detected: ARM erratum 1418040
Mar 13 12:21:00.204681 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 13 12:21:00.204690 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 13 12:21:00.204696 kernel: alternatives: applying boot alternatives
Mar 13 12:21:00.204704 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949
Mar 13 12:21:00.204712 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 12:21:00.204719 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 12:21:00.204725 kernel: Fallback order for Node 0: 0
Mar 13 12:21:00.204732 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 13 12:21:00.204739 kernel: Policy zone: Normal
Mar 13 12:21:00.204746 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 12:21:00.204752 kernel: software IO TLB: area num 2.
Mar 13 12:21:00.204759 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 13 12:21:00.204767 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8120K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 13 12:21:00.204774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 12:21:00.204781 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 12:21:00.204788 kernel: rcu: RCU event tracing is enabled.
Mar 13 12:21:00.204795 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 12:21:00.204802 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 12:21:00.204809 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 12:21:00.204815 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 12:21:00.204822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 12:21:00.204829 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 13 12:21:00.204836 kernel: GICv3: 960 SPIs implemented
Mar 13 12:21:00.204844 kernel: GICv3: 0 Extended SPIs implemented
Mar 13 12:21:00.204850 kernel: Root IRQ handler: gic_handle_irq
Mar 13 12:21:00.204857 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 13 12:21:00.204864 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 13 12:21:00.204870 kernel: ITS: No ITS available, not enabling LPIs
Mar 13 12:21:00.204877 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 12:21:00.204884 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 13 12:21:00.204891 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 13 12:21:00.204898 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 13 12:21:00.204905 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 13 12:21:00.204911 kernel: Console: colour dummy device 80x25
Mar 13 12:21:00.204920 kernel: printk: console [tty1] enabled
Mar 13 12:21:00.204927 kernel: ACPI: Core revision 20230628
Mar 13 12:21:00.204934 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 13 12:21:00.204941 kernel: pid_max: default: 32768 minimum: 301
Mar 13 12:21:00.204949 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 13 12:21:00.204956 kernel: landlock: Up and running.
Mar 13 12:21:00.204963 kernel: SELinux: Initializing.
Mar 13 12:21:00.204971 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 12:21:00.204978 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 12:21:00.204987 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 12:21:00.204994 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 12:21:00.205001 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 13 12:21:00.205008 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 13 12:21:00.205016 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 13 12:21:00.205023 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 12:21:00.205030 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 12:21:00.205037 kernel: Remapping and enabling EFI services.
Mar 13 12:21:00.205051 kernel: smp: Bringing up secondary CPUs ...
Mar 13 12:21:00.205058 kernel: Detected PIPT I-cache on CPU1
Mar 13 12:21:00.205066 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 13 12:21:00.205074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 13 12:21:00.205083 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 13 12:21:00.205090 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 12:21:00.205098 kernel: SMP: Total of 2 processors activated.
Mar 13 12:21:00.205106 kernel: CPU features: detected: 32-bit EL0 Support
Mar 13 12:21:00.205113 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 13 12:21:00.205122 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 13 12:21:00.205129 kernel: CPU features: detected: CRC32 instructions
Mar 13 12:21:00.205137 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 13 12:21:00.205144 kernel: CPU features: detected: LSE atomic instructions
Mar 13 12:21:00.205151 kernel: CPU features: detected: Privileged Access Never
Mar 13 12:21:00.205159 kernel: CPU: All CPU(s) started at EL1
Mar 13 12:21:00.205166 kernel: alternatives: applying system-wide alternatives
Mar 13 12:21:00.205173 kernel: devtmpfs: initialized
Mar 13 12:21:00.205181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 12:21:00.205190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 12:21:00.205197 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 12:21:00.205204 kernel: SMBIOS 3.1.0 present.
Mar 13 12:21:00.205212 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 13 12:21:00.205219 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 12:21:00.205226 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 13 12:21:00.205234 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 13 12:21:00.205241 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 13 12:21:00.205249 kernel: audit: initializing netlink subsys (disabled)
Mar 13 12:21:00.205257 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 13 12:21:00.205264 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 12:21:00.205272 kernel: cpuidle: using governor menu
Mar 13 12:21:00.205279 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 13 12:21:00.205286 kernel: ASID allocator initialised with 32768 entries
Mar 13 12:21:00.205294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 12:21:00.205301 kernel: Serial: AMBA PL011 UART driver
Mar 13 12:21:00.205308 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 13 12:21:00.205315 kernel: Modules: 0 pages in range for non-PLT usage
Mar 13 12:21:00.205324 kernel: Modules: 509008 pages in range for PLT usage
Mar 13 12:21:00.205332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 12:21:00.205339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 12:21:00.205347 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 13 12:21:00.205354 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 13 12:21:00.205361 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 12:21:00.205369 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 12:21:00.205376 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 13 12:21:00.205383 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 13 12:21:00.205392 kernel: ACPI: Added _OSI(Module Device)
Mar 13 12:21:00.205399 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 12:21:00.205406 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 12:21:00.205414 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 12:21:00.205421 kernel: ACPI: Interpreter enabled
Mar 13 12:21:00.205428 kernel: ACPI: Using GIC for interrupt routing
Mar 13 12:21:00.205435 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 13 12:21:00.205443 kernel: printk: console [ttyAMA0] enabled
Mar 13 12:21:00.205450 kernel: printk: bootconsole [pl11] disabled
Mar 13 12:21:00.205459 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 13 12:21:00.205466 kernel: iommu: Default domain type: Translated
Mar 13 12:21:00.205473 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 13 12:21:00.205481 kernel: efivars: Registered efivars operations
Mar 13 12:21:00.205488 kernel: vgaarb: loaded
Mar 13 12:21:00.205495 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 13 12:21:00.205502 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 12:21:00.205510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 12:21:00.205517 kernel: pnp: PnP ACPI init
Mar 13 12:21:00.205526 kernel: pnp: PnP ACPI: found 0 devices
Mar 13 12:21:00.205533 kernel: NET: Registered PF_INET protocol family
Mar 13 12:21:00.205540 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 12:21:00.205548 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 12:21:00.205555 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 12:21:00.205563 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 12:21:00.205570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 12:21:00.205577 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 12:21:00.205585 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 12:21:00.205593 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 12:21:00.205601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 12:21:00.205608 kernel: PCI: CLS 0 bytes, default 64
Mar 13 12:21:00.205615 kernel: kvm [1]: HYP mode not available
Mar 13 12:21:00.205622 kernel: Initialise system trusted keyrings
Mar 13 12:21:00.205629 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 12:21:00.205637 kernel: Key type asymmetric registered
Mar 13 12:21:00.205644 kernel: Asymmetric key parser 'x509' registered
Mar 13 12:21:00.205655 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 12:21:00.205664 kernel: io scheduler mq-deadline registered
Mar 13 12:21:00.205672 kernel: io scheduler kyber registered
Mar 13 12:21:00.205679 kernel: io scheduler bfq registered
Mar 13 12:21:00.205687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 12:21:00.205695 kernel: thunder_xcv, ver 1.0
Mar 13 12:21:00.205702 kernel: thunder_bgx, ver 1.0
Mar 13 12:21:00.205709 kernel: nicpf, ver 1.0
Mar 13 12:21:00.205716 kernel: nicvf, ver 1.0
Mar 13 12:21:00.205843 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 13 12:21:00.205918 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-13T12:20:59 UTC (1773404459)
Mar 13 12:21:00.205928 kernel: efifb: probing for efifb
Mar 13 12:21:00.205936 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 13 12:21:00.205943 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 13 12:21:00.205950 kernel: efifb: scrolling: redraw
Mar 13 12:21:00.205958 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 12:21:00.205965 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 12:21:00.205972 kernel: fb0: EFI VGA frame buffer device
Mar 13 12:21:00.205982 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 13 12:21:00.205989 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 13 12:21:00.205997 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 13 12:21:00.206004 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 13 12:21:00.206011 kernel: watchdog: Hard watchdog permanently disabled
Mar 13 12:21:00.206018 kernel: NET: Registered PF_INET6 protocol family
Mar 13 12:21:00.206026 kernel: Segment Routing with IPv6
Mar 13 12:21:00.206033 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 12:21:00.206040 kernel: NET: Registered PF_PACKET protocol family
Mar 13 12:21:00.206049 kernel: Key type dns_resolver registered
Mar 13 12:21:00.206056 kernel: registered taskstats version 1
Mar 13 12:21:00.206063 kernel: Loading compiled-in X.509 certificates
Mar 13 12:21:00.206071 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.129-flatcar: 669007e8dd7e677277a9246a6f3b194a311f8cf1'
Mar 13 12:21:00.206078 kernel: Key type .fscrypt registered
Mar 13 12:21:00.206085 kernel: Key type fscrypt-provisioning registered
Mar 13 12:21:00.206092 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 12:21:00.206099 kernel: ima: Allocated hash algorithm: sha1
Mar 13 12:21:00.206106 kernel: ima: No architecture policies found
Mar 13 12:21:00.206115 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 13 12:21:00.206122 kernel: clk: Disabling unused clocks
Mar 13 12:21:00.206129 kernel: Freeing unused kernel memory: 39424K
Mar 13 12:21:00.206137 kernel: Run /init as init process
Mar 13 12:21:00.206144 kernel: with arguments:
Mar 13 12:21:00.206151 kernel: /init
Mar 13 12:21:00.206158 kernel: with environment:
Mar 13 12:21:00.206165 kernel: HOME=/
Mar 13 12:21:00.206172 kernel: TERM=linux
Mar 13 12:21:00.206181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 13 12:21:00.206192 systemd[1]: Detected virtualization microsoft.
Mar 13 12:21:00.206200 systemd[1]: Detected architecture arm64.
Mar 13 12:21:00.206207 systemd[1]: Running in initrd.
Mar 13 12:21:00.206215 systemd[1]: No hostname configured, using default hostname.
Mar 13 12:21:00.206223 systemd[1]: Hostname set to .
Mar 13 12:21:00.206231 systemd[1]: Initializing machine ID from random generator.
Mar 13 12:21:00.206240 systemd[1]: Queued start job for default target initrd.target.
Mar 13 12:21:00.206248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:21:00.206256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:21:00.206264 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 12:21:00.206272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 12:21:00.206280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 12:21:00.206289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 12:21:00.206298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 12:21:00.206308 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 12:21:00.206316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:21:00.206323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:21:00.206331 systemd[1]: Reached target paths.target - Path Units.
Mar 13 12:21:00.206339 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 12:21:00.206347 systemd[1]: Reached target swap.target - Swaps.
Mar 13 12:21:00.206355 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 12:21:00.206363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 12:21:00.206372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 12:21:00.206380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 12:21:00.206388 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 13 12:21:00.206396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:21:00.206404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:21:00.206412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:21:00.206419 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 12:21:00.206427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 12:21:00.206437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 12:21:00.206445 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 12:21:00.206453 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 12:21:00.206460 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 12:21:00.206480 systemd-journald[217]: Collecting audit messages is disabled.
Mar 13 12:21:00.206501 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 12:21:00.206510 systemd-journald[217]: Journal started
Mar 13 12:21:00.206528 systemd-journald[217]: Runtime Journal (/run/log/journal/c9135e0383c343c59444677c187c6f2e) is 8.0M, max 78.5M, 70.5M free.
Mar 13 12:21:00.220275 systemd-modules-load[218]: Inserted module 'overlay'
Mar 13 12:21:00.227906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:00.249222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 12:21:00.249275 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 12:21:00.252190 kernel: Bridge firewalling registered
Mar 13 12:21:00.252260 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 13 12:21:00.260667 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 12:21:00.272874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:21:00.286729 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 12:21:00.295794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:21:00.304394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:00.323911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 12:21:00.330990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 12:21:00.347551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 12:21:00.374796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 12:21:00.380910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:21:00.401818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:21:00.407141 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 12:21:00.417508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 12:21:00.446869 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 12:21:00.454977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 12:21:00.478114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 12:21:00.490746 dracut-cmdline[251]: dracut-dracut-053
Mar 13 12:21:00.498870 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949
Mar 13 12:21:00.526852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:21:00.531160 systemd-resolved[252]: Positive Trust Anchors:
Mar 13 12:21:00.531169 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 12:21:00.531201 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 12:21:00.533329 systemd-resolved[252]: Defaulting to hostname 'linux'.
Mar 13 12:21:00.535894 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 12:21:00.551450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 12:21:00.666677 kernel: SCSI subsystem initialized
Mar 13 12:21:00.674668 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 12:21:00.684673 kernel: iscsi: registered transport (tcp)
Mar 13 12:21:00.701416 kernel: iscsi: registered transport (qla4xxx)
Mar 13 12:21:00.701464 kernel: QLogic iSCSI HBA Driver
Mar 13 12:21:00.736279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 12:21:00.749072 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 12:21:00.778675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 12:21:00.778705 kernel: device-mapper: uevent: version 1.0.3
Mar 13 12:21:00.784479 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 13 12:21:00.830671 kernel: raid6: neonx8 gen() 15815 MB/s
Mar 13 12:21:00.850663 kernel: raid6: neonx4 gen() 15681 MB/s
Mar 13 12:21:00.869654 kernel: raid6: neonx2 gen() 13318 MB/s
Mar 13 12:21:00.889662 kernel: raid6: neonx1 gen() 10558 MB/s
Mar 13 12:21:00.909656 kernel: raid6: int64x8 gen() 6984 MB/s
Mar 13 12:21:00.928657 kernel: raid6: int64x4 gen() 7375 MB/s
Mar 13 12:21:00.947657 kernel: raid6: int64x2 gen() 6150 MB/s
Mar 13 12:21:00.970391 kernel: raid6: int64x1 gen() 5071 MB/s
Mar 13 12:21:00.970402 kernel: raid6: using algorithm neonx8 gen() 15815 MB/s
Mar 13 12:21:00.992681 kernel: raid6: .... xor() 12035 MB/s, rmw enabled
Mar 13 12:21:00.992707 kernel: raid6: using neon recovery algorithm
Mar 13 12:21:01.002694 kernel: xor: measuring software checksum speed
Mar 13 12:21:01.002711 kernel: 8regs : 19812 MB/sec
Mar 13 12:21:01.006075 kernel: 32regs : 19674 MB/sec
Mar 13 12:21:01.009018 kernel: arm64_neon : 27132 MB/sec
Mar 13 12:21:01.012623 kernel: xor: using function: arm64_neon (27132 MB/sec)
Mar 13 12:21:01.062662 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 12:21:01.071948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 12:21:01.091836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 12:21:01.111971 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Mar 13 12:21:01.115869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 12:21:01.140851 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 12:21:01.156254 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 13 12:21:01.181541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 12:21:01.198144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 12:21:01.235753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 12:21:01.250841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 12:21:01.273432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 12:21:01.283474 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 12:21:01.296161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 12:21:01.301922 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 12:21:01.326877 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 12:21:01.346699 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 12:21:01.378319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 12:21:01.388757 kernel: hv_vmbus: Vmbus version:5.3
Mar 13 12:21:01.388787 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 13 12:21:01.378436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:21:01.411573 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 13 12:21:01.411610 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 13 12:21:01.424559 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 13 12:21:01.424597 kernel: hv_vmbus: registering driver hv_netvsc
Mar 13 12:21:01.426977 kernel: PTP clock support registered
Mar 13 12:21:01.430007 kernel: hv_vmbus: registering driver hid_hyperv
Mar 13 12:21:01.430033 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 13 12:21:01.436819 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 12:21:01.463010 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 13 12:21:01.463178 kernel: hv_vmbus: registering driver hv_storvsc
Mar 13 12:21:01.459657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:21:01.475531 kernel: scsi host1: storvsc_host_t
Mar 13 12:21:01.475710 kernel: scsi host0: storvsc_host_t
Mar 13 12:21:01.459838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:01.495657 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 13 12:21:01.485224 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:01.506671 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 13 12:21:01.507981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:01.528933 kernel: hv_utils: Registering HyperV Utility Driver
Mar 13 12:21:01.528975 kernel: hv_vmbus: registering driver hv_utils
Mar 13 12:21:01.533669 kernel: hv_utils: Heartbeat IC version 3.0
Mar 13 12:21:01.546390 kernel: hv_utils: Shutdown IC version 3.2
Mar 13 12:21:01.546424 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: VF slot 1 added
Mar 13 12:21:01.546588 kernel: hv_utils: TimeSync IC version 4.0
Mar 13 12:21:01.926805 systemd-resolved[252]: Clock change detected. Flushing caches.
Mar 13 12:21:01.936649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:01.967378 kernel: hv_vmbus: registering driver hv_pci
Mar 13 12:21:01.967425 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 13 12:21:01.967644 kernel: hv_pci 009ba370-29af-4e97-b6b9-8d45e7f4dbc7: PCI VMBus probing: Using version 0x10004
Mar 13 12:21:01.967761 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 12:21:01.972441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 12:21:01.998642 kernel: hv_pci 009ba370-29af-4e97-b6b9-8d45e7f4dbc7: PCI host bridge to bus 29af:00
Mar 13 12:21:01.998794 kernel: pci_bus 29af:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 13 12:21:01.998896 kernel: pci_bus 29af:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 13 12:21:02.000440 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 13 12:21:02.010058 kernel: pci 29af:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 13 12:21:02.017489 kernel: pci 29af:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 13 12:21:02.031800 kernel: pci 29af:00:02.0: enabling Extended Tags
Mar 13 12:21:02.025709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:21:02.063795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 12:21:02.063969 kernel: pci 29af:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 29af:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 13 12:21:02.064094 kernel: pci_bus 29af:00: busn_res: [bus 00-ff] end is updated to 00 Mar 13 12:21:02.064189 kernel: pci 29af:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 13 12:21:02.074446 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 13 12:21:02.074684 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 13 12:21:02.078476 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 13 12:21:02.085629 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 13 12:21:02.085824 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 13 12:21:02.093477 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:02.099445 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 13 12:21:02.116445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 12:21:02.147659 kernel: mlx5_core 29af:00:02.0: enabling device (0000 -> 0002) Mar 13 12:21:02.153467 kernel: mlx5_core 29af:00:02.0: firmware version: 16.30.5026 Mar 13 12:21:02.350043 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: VF registering: eth1 Mar 13 12:21:02.350256 kernel: mlx5_core 29af:00:02.0 eth1: joined to eth0 Mar 13 12:21:02.356457 kernel: mlx5_core 29af:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 13 12:21:02.365463 kernel: mlx5_core 29af:00:02.0 enP10671s1: renamed from eth1 Mar 13 12:21:02.689940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 13 12:21:02.712478 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (500) Mar 13 12:21:02.721040 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 13 12:21:02.735313 kernel: BTRFS: device fsid beae115b-a7a4-4bd0-8b91-fe8e188f678a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (505) Mar 13 12:21:02.741228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 13 12:21:02.760810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 13 12:21:02.766447 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 13 12:21:02.793625 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 12:21:02.815460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:02.824453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:02.833451 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:03.834447 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:03.835346 disk-uuid[605]: The operation has completed successfully. Mar 13 12:21:03.903372 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 12:21:03.903497 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 12:21:03.934620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 12:21:03.946214 sh[719]: Success Mar 13 12:21:03.977495 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 13 12:21:04.221332 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 12:21:04.240254 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 12:21:04.247818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 13 12:21:04.273354 kernel: BTRFS info (device dm-0): first mount of filesystem beae115b-a7a4-4bd0-8b91-fe8e188f678a Mar 13 12:21:04.273396 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:04.279449 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 13 12:21:04.279488 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 13 12:21:04.286442 kernel: BTRFS info (device dm-0): using free space tree Mar 13 12:21:04.601627 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 12:21:04.606234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 12:21:04.623622 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 12:21:04.634596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 13 12:21:04.664992 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:04.665038 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:04.668983 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:21:04.713090 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:21:04.719980 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 13 12:21:04.730444 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:04.735902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 12:21:04.742831 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 12:21:04.763696 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 12:21:04.774833 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 13 12:21:04.806545 systemd-networkd[903]: lo: Link UP Mar 13 12:21:04.806552 systemd-networkd[903]: lo: Gained carrier Mar 13 12:21:04.808363 systemd-networkd[903]: Enumeration completed Mar 13 12:21:04.808472 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 12:21:04.818001 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:21:04.818004 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 12:21:04.818326 systemd[1]: Reached target network.target - Network. Mar 13 12:21:04.893445 kernel: mlx5_core 29af:00:02.0 enP10671s1: Link up Mar 13 12:21:04.931931 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: Data path switched to VF: enP10671s1 Mar 13 12:21:04.931453 systemd-networkd[903]: enP10671s1: Link UP Mar 13 12:21:04.931537 systemd-networkd[903]: eth0: Link UP Mar 13 12:21:04.931627 systemd-networkd[903]: eth0: Gained carrier Mar 13 12:21:04.931635 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:21:04.951632 systemd-networkd[903]: enP10671s1: Gained carrier Mar 13 12:21:04.963476 systemd-networkd[903]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 12:21:05.738880 ignition[902]: Ignition 2.19.0 Mar 13 12:21:05.738893 ignition[902]: Stage: fetch-offline Mar 13 12:21:05.742859 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 13 12:21:05.738942 ignition[902]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.738950 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.739033 ignition[902]: parsed url from cmdline: "" Mar 13 12:21:05.739036 ignition[902]: no config URL provided Mar 13 12:21:05.739041 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 12:21:05.764676 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 13 12:21:05.739047 ignition[902]: no config at "/usr/lib/ignition/user.ign" Mar 13 12:21:05.739051 ignition[902]: failed to fetch config: resource requires networking Mar 13 12:21:05.739509 ignition[902]: Ignition finished successfully Mar 13 12:21:05.786957 ignition[912]: Ignition 2.19.0 Mar 13 12:21:05.786963 ignition[912]: Stage: fetch Mar 13 12:21:05.787113 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.787121 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.787214 ignition[912]: parsed url from cmdline: "" Mar 13 12:21:05.787217 ignition[912]: no config URL provided Mar 13 12:21:05.787221 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 12:21:05.787228 ignition[912]: no config at "/usr/lib/ignition/user.ign" Mar 13 12:21:05.787247 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 13 12:21:05.881879 ignition[912]: GET result: OK Mar 13 12:21:05.881975 ignition[912]: config has been read from IMDS userdata Mar 13 12:21:05.882018 ignition[912]: parsing config with SHA512: 910f6f40d938c348423dfc81f1c9322df947d716e94fc7277670fe76f3f0e314bec67e7c80194627e572c990f35cf0508348a359980edba2777629671b4f6d19 Mar 13 12:21:05.886237 unknown[912]: fetched base config from "system" Mar 13 12:21:05.886653 ignition[912]: fetch: fetch complete Mar 13 12:21:05.886253 unknown[912]: fetched base config from "system" Mar 13 
12:21:05.886658 ignition[912]: fetch: fetch passed Mar 13 12:21:05.886259 unknown[912]: fetched user config from "azure" Mar 13 12:21:05.886703 ignition[912]: Ignition finished successfully Mar 13 12:21:05.891025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 13 12:21:05.909623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 12:21:05.930701 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 12:21:05.925508 ignition[919]: Ignition 2.19.0 Mar 13 12:21:05.925514 ignition[919]: Stage: kargs Mar 13 12:21:05.925697 ignition[919]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.925706 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.926688 ignition[919]: kargs: kargs passed Mar 13 12:21:05.950570 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 12:21:05.926732 ignition[919]: Ignition finished successfully Mar 13 12:21:05.973144 ignition[925]: Ignition 2.19.0 Mar 13 12:21:05.973161 ignition[925]: Stage: disks Mar 13 12:21:05.975642 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 12:21:05.973332 ignition[925]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.981126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 12:21:05.973342 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.988806 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 12:21:05.974208 ignition[925]: disks: disks passed Mar 13 12:21:05.998543 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 12:21:05.974249 ignition[925]: Ignition finished successfully Mar 13 12:21:06.006665 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 12:21:06.016422 systemd[1]: Reached target basic.target - Basic System. 
Mar 13 12:21:06.039654 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 13 12:21:06.120546 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 13 12:21:06.128703 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 12:21:06.142621 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 12:21:06.196459 kernel: EXT4-fs (sda9): mounted filesystem e3689c4f-fa92-4cc2-b7ea-ac589877c8df r/w with ordered data mode. Quota mode: none. Mar 13 12:21:06.196629 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 12:21:06.201190 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 12:21:06.248503 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 12:21:06.268452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944) Mar 13 12:21:06.279485 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:06.279523 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:06.282997 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:21:06.284538 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 12:21:06.290599 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 12:21:06.304673 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 12:21:06.304716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 12:21:06.320971 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 13 12:21:06.338446 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:21:06.341564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 13 12:21:06.347738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 12:21:06.817563 systemd-networkd[903]: eth0: Gained IPv6LL Mar 13 12:21:06.910550 coreos-metadata[959]: Mar 13 12:21:06.910 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 12:21:06.919746 coreos-metadata[959]: Mar 13 12:21:06.919 INFO Fetch successful Mar 13 12:21:06.919746 coreos-metadata[959]: Mar 13 12:21:06.919 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 13 12:21:06.934849 coreos-metadata[959]: Mar 13 12:21:06.934 INFO Fetch successful Mar 13 12:21:06.951337 coreos-metadata[959]: Mar 13 12:21:06.951 INFO wrote hostname ci-4081.3.101-461ebd96c0 to /sysroot/etc/hostname Mar 13 12:21:06.958922 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 12:21:07.113825 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 12:21:07.158046 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Mar 13 12:21:07.163788 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 12:21:07.171031 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 12:21:08.214305 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 12:21:08.229653 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 12:21:08.236105 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 12:21:08.263189 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 12:21:08.267579 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:08.289122 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 13 12:21:08.298304 ignition[1064]: INFO : Ignition 2.19.0 Mar 13 12:21:08.298304 ignition[1064]: INFO : Stage: mount Mar 13 12:21:08.298304 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:08.298304 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:08.298304 ignition[1064]: INFO : mount: mount passed Mar 13 12:21:08.298304 ignition[1064]: INFO : Ignition finished successfully Mar 13 12:21:08.298717 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 12:21:08.325564 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 12:21:08.339640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 12:21:08.365445 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075) Mar 13 12:21:08.376829 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:08.376883 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:08.380661 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:21:08.387439 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:21:08.389115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 12:21:08.411700 ignition[1093]: INFO : Ignition 2.19.0 Mar 13 12:21:08.411700 ignition[1093]: INFO : Stage: files Mar 13 12:21:08.418602 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:08.418602 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:08.418602 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Mar 13 12:21:08.443408 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 12:21:08.443408 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 12:21:08.540349 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 12:21:08.546985 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 12:21:08.546985 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 12:21:08.540716 unknown[1093]: wrote ssh authorized keys file for user: core Mar 13 12:21:08.564558 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 12:21:08.564558 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 13 12:21:08.674159 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 12:21:09.013895 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 12:21:09.013895 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Mar 13 12:21:09.491680 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 12:21:09.739757 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.739757 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 12:21:09.773362 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 12:21:09.782748 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 12:21:09.782748 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 12:21:09.782748 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: files passed Mar 13 12:21:09.810384 ignition[1093]: INFO : Ignition finished successfully Mar 13 12:21:09.806031 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 12:21:09.825643 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 12:21:09.840597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 13 12:21:09.852242 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 12:21:09.852330 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 12:21:09.889568 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:21:09.889568 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:21:09.905034 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:21:09.905505 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 12:21:09.917916 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 12:21:09.938639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 12:21:09.963184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 12:21:09.964512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 12:21:09.973411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 12:21:09.983179 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 12:21:09.992047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 12:21:10.004631 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 12:21:10.022570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 12:21:10.035730 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 12:21:10.054224 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 12:21:10.056492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 13 12:21:10.065310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 12:21:10.075335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 12:21:10.085809 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 12:21:10.095256 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 12:21:10.095334 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 12:21:10.108484 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 12:21:10.118327 systemd[1]: Stopped target basic.target - Basic System. Mar 13 12:21:10.127260 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 12:21:10.136345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 12:21:10.146868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 12:21:10.156776 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 12:21:10.166365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 12:21:10.176782 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 12:21:10.187187 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 12:21:10.195958 systemd[1]: Stopped target swap.target - Swaps. Mar 13 12:21:10.203635 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 12:21:10.203700 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 12:21:10.215754 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 12:21:10.225381 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 12:21:10.235405 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 12:21:10.235447 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 13 12:21:10.246091 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 12:21:10.246150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 12:21:10.261686 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 12:21:10.261732 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 12:21:10.271388 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 12:21:10.271428 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 12:21:10.280398 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 13 12:21:10.280438 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 12:21:10.311644 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 12:21:10.323419 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 12:21:10.323503 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 12:21:10.341474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 12:21:10.350280 ignition[1145]: INFO : Ignition 2.19.0 Mar 13 12:21:10.350280 ignition[1145]: INFO : Stage: umount Mar 13 12:21:10.350280 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:10.350280 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:10.350280 ignition[1145]: INFO : umount: umount passed Mar 13 12:21:10.350280 ignition[1145]: INFO : Ignition finished successfully Mar 13 12:21:10.355571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 12:21:10.355644 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 12:21:10.363091 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 12:21:10.363145 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 13 12:21:10.371830 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 12:21:10.371927 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 12:21:10.380963 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 12:21:10.381053 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 12:21:10.390686 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 12:21:10.390730 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 12:21:10.400924 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 12:21:10.400963 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 12:21:10.410001 systemd[1]: Stopped target network.target - Network. Mar 13 12:21:10.418274 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 12:21:10.418317 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 12:21:10.429498 systemd[1]: Stopped target paths.target - Path Units. Mar 13 12:21:10.439014 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 12:21:10.442452 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 12:21:10.451138 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 12:21:10.459399 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 12:21:10.463708 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 12:21:10.463769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 12:21:10.472268 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 12:21:10.472312 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 12:21:10.480988 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 12:21:10.481035 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Mar 13 12:21:10.490496 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 12:21:10.490532 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 12:21:10.678869 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: Data path switched from VF: enP10671s1 Mar 13 12:21:10.500775 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 12:21:10.510989 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 12:21:10.514785 systemd-networkd[903]: eth0: DHCPv6 lease lost Mar 13 12:21:10.525033 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 12:21:10.525574 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 12:21:10.525666 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 12:21:10.533934 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 12:21:10.534001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 12:21:10.554691 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 12:21:10.564663 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 12:21:10.564736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 12:21:10.571746 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 12:21:10.586236 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 12:21:10.586848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 12:21:10.606474 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 12:21:10.606570 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 12:21:10.615548 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 12:21:10.615615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 13 12:21:10.624111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 12:21:10.624164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 12:21:10.634507 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 12:21:10.634652 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 12:21:10.644310 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 12:21:10.644381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 12:21:10.659912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 12:21:10.659950 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 12:21:10.665221 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 12:21:10.665271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 12:21:10.678935 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 12:21:10.678984 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 12:21:10.688502 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 12:21:10.688553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 12:21:10.720701 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 12:21:10.734488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 12:21:10.734550 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 12:21:10.746370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 12:21:10.746415 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:21:10.757817 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Mar 13 12:21:10.757924 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 12:21:10.930115 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 13 12:21:10.768020 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 12:21:10.768112 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 13 12:21:10.778361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 12:21:10.778457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 12:21:10.789077 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 12:21:10.797856 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 12:21:10.797937 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 12:21:10.826722 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 12:21:10.841196 systemd[1]: Switching root. Mar 13 12:21:10.968580 systemd-journald[217]: Journal stopped Mar 13 12:21:00.204509 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 13 12:21:00.204516 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 13 12:21:00.204524 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 13 12:21:00.204531 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 13 12:21:00.204537 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 13 12:21:00.204544 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 13 12:21:00.204551 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 13 12:21:00.204558 kernel: psci: probing for conduit method from ACPI. Mar 13 12:21:00.204564 kernel: psci: PSCIv1.1 detected in firmware. Mar 13 12:21:00.204571 kernel: psci: Using standard PSCI v0.2 function IDs Mar 13 12:21:00.204578 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Mar 13 12:21:00.204584 kernel: psci: SMC Calling Convention v1.4 Mar 13 12:21:00.204591 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 13 12:21:00.204598 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 13 12:21:00.204606 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 13 12:21:00.204613 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 13 12:21:00.204620 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 13 12:21:00.204626 kernel: Detected PIPT I-cache on CPU0 Mar 13 12:21:00.204633 kernel: CPU features: detected: GIC system register CPU interface Mar 13 12:21:00.204640 kernel: CPU features: detected: Hardware dirty bit management Mar 13 12:21:00.204652 kernel: CPU features: detected: Spectre-BHB Mar 13 12:21:00.204661 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 13 12:21:00.204668 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 13 12:21:00.204675 kernel: CPU features: detected: ARM erratum 1418040 Mar 13 12:21:00.204681 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 13 12:21:00.204690 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 13 12:21:00.204696 kernel: alternatives: applying boot alternatives Mar 13 12:21:00.204704 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949 Mar 13 12:21:00.204712 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 13 12:21:00.204719 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 13 12:21:00.204725 kernel: Fallback order for Node 0: 0 Mar 13 
12:21:00.204732 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 13 12:21:00.204739 kernel: Policy zone: Normal Mar 13 12:21:00.204746 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 13 12:21:00.204752 kernel: software IO TLB: area num 2. Mar 13 12:21:00.204759 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 13 12:21:00.204767 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8120K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 13 12:21:00.204774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 13 12:21:00.204781 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 13 12:21:00.204788 kernel: rcu: RCU event tracing is enabled. Mar 13 12:21:00.204795 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 13 12:21:00.204802 kernel: Trampoline variant of Tasks RCU enabled. Mar 13 12:21:00.204809 kernel: Tracing variant of Tasks RCU enabled. Mar 13 12:21:00.204815 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 13 12:21:00.204822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 13 12:21:00.204829 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 13 12:21:00.204836 kernel: GICv3: 960 SPIs implemented Mar 13 12:21:00.204844 kernel: GICv3: 0 Extended SPIs implemented Mar 13 12:21:00.204850 kernel: Root IRQ handler: gic_handle_irq Mar 13 12:21:00.204857 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 13 12:21:00.204864 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 13 12:21:00.204870 kernel: ITS: No ITS available, not enabling LPIs Mar 13 12:21:00.204877 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 13 12:21:00.204884 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 13 12:21:00.204891 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). 
Mar 13 12:21:00.204898 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 13 12:21:00.204905 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 13 12:21:00.204911 kernel: Console: colour dummy device 80x25 Mar 13 12:21:00.204920 kernel: printk: console [tty1] enabled Mar 13 12:21:00.204927 kernel: ACPI: Core revision 20230628 Mar 13 12:21:00.204934 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 13 12:21:00.204941 kernel: pid_max: default: 32768 minimum: 301 Mar 13 12:21:00.204949 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 13 12:21:00.204956 kernel: landlock: Up and running. Mar 13 12:21:00.204963 kernel: SELinux: Initializing. Mar 13 12:21:00.204971 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 13 12:21:00.204978 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 13 12:21:00.204987 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 12:21:00.204994 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 12:21:00.205001 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 13 12:21:00.205008 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 13 12:21:00.205016 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 13 12:21:00.205023 kernel: rcu: Hierarchical SRCU implementation. Mar 13 12:21:00.205030 kernel: rcu: Max phase no-delay instances is 400. Mar 13 12:21:00.205037 kernel: Remapping and enabling EFI services. Mar 13 12:21:00.205051 kernel: smp: Bringing up secondary CPUs ... 
Mar 13 12:21:00.205058 kernel: Detected PIPT I-cache on CPU1 Mar 13 12:21:00.205066 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 13 12:21:00.205074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 13 12:21:00.205083 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 13 12:21:00.205090 kernel: smp: Brought up 1 node, 2 CPUs Mar 13 12:21:00.205098 kernel: SMP: Total of 2 processors activated. Mar 13 12:21:00.205106 kernel: CPU features: detected: 32-bit EL0 Support Mar 13 12:21:00.205113 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 13 12:21:00.205122 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 13 12:21:00.205129 kernel: CPU features: detected: CRC32 instructions Mar 13 12:21:00.205137 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 13 12:21:00.205144 kernel: CPU features: detected: LSE atomic instructions Mar 13 12:21:00.205151 kernel: CPU features: detected: Privileged Access Never Mar 13 12:21:00.205159 kernel: CPU: All CPU(s) started at EL1 Mar 13 12:21:00.205166 kernel: alternatives: applying system-wide alternatives Mar 13 12:21:00.205173 kernel: devtmpfs: initialized Mar 13 12:21:00.205181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 13 12:21:00.205190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 13 12:21:00.205197 kernel: pinctrl core: initialized pinctrl subsystem Mar 13 12:21:00.205204 kernel: SMBIOS 3.1.0 present. 
Mar 13 12:21:00.205212 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 13 12:21:00.205219 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 13 12:21:00.205226 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 13 12:21:00.205234 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 13 12:21:00.205241 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 13 12:21:00.205249 kernel: audit: initializing netlink subsys (disabled) Mar 13 12:21:00.205257 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 13 12:21:00.205264 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 13 12:21:00.205272 kernel: cpuidle: using governor menu Mar 13 12:21:00.205279 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 13 12:21:00.205286 kernel: ASID allocator initialised with 32768 entries Mar 13 12:21:00.205294 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 13 12:21:00.205301 kernel: Serial: AMBA PL011 UART driver Mar 13 12:21:00.205308 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 13 12:21:00.205315 kernel: Modules: 0 pages in range for non-PLT usage Mar 13 12:21:00.205324 kernel: Modules: 509008 pages in range for PLT usage Mar 13 12:21:00.205332 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 13 12:21:00.205339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 13 12:21:00.205347 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 13 12:21:00.205354 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 13 12:21:00.205361 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 13 12:21:00.205369 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 13 12:21:00.205376 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Mar 13 12:21:00.205383 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 13 12:21:00.205392 kernel: ACPI: Added _OSI(Module Device) Mar 13 12:21:00.205399 kernel: ACPI: Added _OSI(Processor Device) Mar 13 12:21:00.205406 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 13 12:21:00.205414 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 13 12:21:00.205421 kernel: ACPI: Interpreter enabled Mar 13 12:21:00.205428 kernel: ACPI: Using GIC for interrupt routing Mar 13 12:21:00.205435 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 13 12:21:00.205443 kernel: printk: console [ttyAMA0] enabled Mar 13 12:21:00.205450 kernel: printk: bootconsole [pl11] disabled Mar 13 12:21:00.205459 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 13 12:21:00.205466 kernel: iommu: Default domain type: Translated Mar 13 12:21:00.205473 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 13 12:21:00.205481 kernel: efivars: Registered efivars operations Mar 13 12:21:00.205488 kernel: vgaarb: loaded Mar 13 12:21:00.205495 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 13 12:21:00.205502 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 12:21:00.205510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 12:21:00.205517 kernel: pnp: PnP ACPI init Mar 13 12:21:00.205526 kernel: pnp: PnP ACPI: found 0 devices Mar 13 12:21:00.205533 kernel: NET: Registered PF_INET protocol family Mar 13 12:21:00.205540 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 12:21:00.205548 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 13 12:21:00.205555 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 12:21:00.205563 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Mar 13 12:21:00.205570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 13 12:21:00.205577 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 13 12:21:00.205585 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 12:21:00.205593 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 12:21:00.205601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 12:21:00.205608 kernel: PCI: CLS 0 bytes, default 64 Mar 13 12:21:00.205615 kernel: kvm [1]: HYP mode not available Mar 13 12:21:00.205622 kernel: Initialise system trusted keyrings Mar 13 12:21:00.205629 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 13 12:21:00.205637 kernel: Key type asymmetric registered Mar 13 12:21:00.205644 kernel: Asymmetric key parser 'x509' registered Mar 13 12:21:00.205655 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 12:21:00.205664 kernel: io scheduler mq-deadline registered Mar 13 12:21:00.205672 kernel: io scheduler kyber registered Mar 13 12:21:00.205679 kernel: io scheduler bfq registered Mar 13 12:21:00.205687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 12:21:00.205695 kernel: thunder_xcv, ver 1.0 Mar 13 12:21:00.205702 kernel: thunder_bgx, ver 1.0 Mar 13 12:21:00.205709 kernel: nicpf, ver 1.0 Mar 13 12:21:00.205716 kernel: nicvf, ver 1.0 Mar 13 12:21:00.205843 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 13 12:21:00.205918 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-13T12:20:59 UTC (1773404459) Mar 13 12:21:00.205928 kernel: efifb: probing for efifb Mar 13 12:21:00.205936 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 13 12:21:00.205943 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 13 12:21:00.205950 kernel: efifb: scrolling: redraw Mar 13 12:21:00.205958 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Mar 13 12:21:00.205965 kernel: Console: switching to colour frame buffer device 128x48 Mar 13 12:21:00.205972 kernel: fb0: EFI VGA frame buffer device Mar 13 12:21:00.205982 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 13 12:21:00.205989 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 13 12:21:00.205997 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 13 12:21:00.206004 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 13 12:21:00.206011 kernel: watchdog: Hard watchdog permanently disabled Mar 13 12:21:00.206018 kernel: NET: Registered PF_INET6 protocol family Mar 13 12:21:00.206026 kernel: Segment Routing with IPv6 Mar 13 12:21:00.206033 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 12:21:00.206040 kernel: NET: Registered PF_PACKET protocol family Mar 13 12:21:00.206049 kernel: Key type dns_resolver registered Mar 13 12:21:00.206056 kernel: registered taskstats version 1 Mar 13 12:21:00.206063 kernel: Loading compiled-in X.509 certificates Mar 13 12:21:00.206071 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.129-flatcar: 669007e8dd7e677277a9246a6f3b194a311f8cf1' Mar 13 12:21:00.206078 kernel: Key type .fscrypt registered Mar 13 12:21:00.206085 kernel: Key type fscrypt-provisioning registered Mar 13 12:21:00.206092 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 13 12:21:00.206099 kernel: ima: Allocated hash algorithm: sha1 Mar 13 12:21:00.206106 kernel: ima: No architecture policies found Mar 13 12:21:00.206115 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 13 12:21:00.206122 kernel: clk: Disabling unused clocks Mar 13 12:21:00.206129 kernel: Freeing unused kernel memory: 39424K Mar 13 12:21:00.206137 kernel: Run /init as init process Mar 13 12:21:00.206144 kernel: with arguments: Mar 13 12:21:00.206151 kernel: /init Mar 13 12:21:00.206158 kernel: with environment: Mar 13 12:21:00.206165 kernel: HOME=/ Mar 13 12:21:00.206172 kernel: TERM=linux Mar 13 12:21:00.206181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 13 12:21:00.206192 systemd[1]: Detected virtualization microsoft. Mar 13 12:21:00.206200 systemd[1]: Detected architecture arm64. Mar 13 12:21:00.206207 systemd[1]: Running in initrd. Mar 13 12:21:00.206215 systemd[1]: No hostname configured, using default hostname. Mar 13 12:21:00.206223 systemd[1]: Hostname set to . Mar 13 12:21:00.206231 systemd[1]: Initializing machine ID from random generator. Mar 13 12:21:00.206240 systemd[1]: Queued start job for default target initrd.target. Mar 13 12:21:00.206248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 12:21:00.206256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 12:21:00.206264 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 12:21:00.206272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 13 12:21:00.206280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 12:21:00.206289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 12:21:00.206298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 12:21:00.206308 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 12:21:00.206316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 12:21:00.206323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 12:21:00.206331 systemd[1]: Reached target paths.target - Path Units. Mar 13 12:21:00.206339 systemd[1]: Reached target slices.target - Slice Units. Mar 13 12:21:00.206347 systemd[1]: Reached target swap.target - Swaps. Mar 13 12:21:00.206355 systemd[1]: Reached target timers.target - Timer Units. Mar 13 12:21:00.206363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 12:21:00.206372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 12:21:00.206380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 12:21:00.206388 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 13 12:21:00.206396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 12:21:00.206404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 12:21:00.206412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 12:21:00.206419 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 12:21:00.206427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 13 12:21:00.206437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 12:21:00.206445 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 12:21:00.206453 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 12:21:00.206460 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 12:21:00.206480 systemd-journald[217]: Collecting audit messages is disabled. Mar 13 12:21:00.206501 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 12:21:00.206510 systemd-journald[217]: Journal started Mar 13 12:21:00.206528 systemd-journald[217]: Runtime Journal (/run/log/journal/c9135e0383c343c59444677c187c6f2e) is 8.0M, max 78.5M, 70.5M free. Mar 13 12:21:00.220275 systemd-modules-load[218]: Inserted module 'overlay' Mar 13 12:21:00.227906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 12:21:00.249222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 13 12:21:00.249275 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 12:21:00.252190 kernel: Bridge firewalling registered Mar 13 12:21:00.252260 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 13 12:21:00.260667 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 12:21:00.272874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 12:21:00.286729 systemd[1]: Finished systemd-fsck-usr.service. Mar 13 12:21:00.295794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 12:21:00.304394 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:21:00.323911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 13 12:21:00.330990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 12:21:00.347551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 12:21:00.374796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 12:21:00.380910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 12:21:00.401818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 12:21:00.407141 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 12:21:00.417508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 12:21:00.446869 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 13 12:21:00.454977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 12:21:00.478114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 12:21:00.490746 dracut-cmdline[251]: dracut-dracut-053 Mar 13 12:21:00.498870 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949 Mar 13 12:21:00.526852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 12:21:00.531160 systemd-resolved[252]: Positive Trust Anchors: Mar 13 12:21:00.531169 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 12:21:00.531201 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 12:21:00.533329 systemd-resolved[252]: Defaulting to hostname 'linux'. Mar 13 12:21:00.535894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 12:21:00.551450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 12:21:00.666677 kernel: SCSI subsystem initialized Mar 13 12:21:00.674668 kernel: Loading iSCSI transport class v2.0-870. Mar 13 12:21:00.684673 kernel: iscsi: registered transport (tcp) Mar 13 12:21:00.701416 kernel: iscsi: registered transport (qla4xxx) Mar 13 12:21:00.701464 kernel: QLogic iSCSI HBA Driver Mar 13 12:21:00.736279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 12:21:00.749072 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 13 12:21:00.778675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 12:21:00.778705 kernel: device-mapper: uevent: version 1.0.3 Mar 13 12:21:00.784479 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 13 12:21:00.830671 kernel: raid6: neonx8 gen() 15815 MB/s Mar 13 12:21:00.850663 kernel: raid6: neonx4 gen() 15681 MB/s Mar 13 12:21:00.869654 kernel: raid6: neonx2 gen() 13318 MB/s Mar 13 12:21:00.889662 kernel: raid6: neonx1 gen() 10558 MB/s Mar 13 12:21:00.909656 kernel: raid6: int64x8 gen() 6984 MB/s Mar 13 12:21:00.928657 kernel: raid6: int64x4 gen() 7375 MB/s Mar 13 12:21:00.947657 kernel: raid6: int64x2 gen() 6150 MB/s Mar 13 12:21:00.970391 kernel: raid6: int64x1 gen() 5071 MB/s Mar 13 12:21:00.970402 kernel: raid6: using algorithm neonx8 gen() 15815 MB/s Mar 13 12:21:00.992681 kernel: raid6: .... xor() 12035 MB/s, rmw enabled Mar 13 12:21:00.992707 kernel: raid6: using neon recovery algorithm Mar 13 12:21:01.002694 kernel: xor: measuring software checksum speed Mar 13 12:21:01.002711 kernel: 8regs : 19812 MB/sec Mar 13 12:21:01.006075 kernel: 32regs : 19674 MB/sec Mar 13 12:21:01.009018 kernel: arm64_neon : 27132 MB/sec Mar 13 12:21:01.012623 kernel: xor: using function: arm64_neon (27132 MB/sec) Mar 13 12:21:01.062662 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 12:21:01.071948 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 12:21:01.091836 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 12:21:01.111971 systemd-udevd[437]: Using default interface naming scheme 'v255'. Mar 13 12:21:01.115869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 12:21:01.140851 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 12:21:01.156254 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Mar 13 12:21:01.181541 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 13 12:21:01.198144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 12:21:01.235753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 12:21:01.250841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 12:21:01.273432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 12:21:01.283474 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 12:21:01.296161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 12:21:01.301922 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 12:21:01.326877 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 12:21:01.346699 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 12:21:01.378319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 12:21:01.388757 kernel: hv_vmbus: Vmbus version:5.3 Mar 13 12:21:01.388787 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 13 12:21:01.378436 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 12:21:01.411573 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 13 12:21:01.411610 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 13 12:21:01.424559 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Mar 13 12:21:01.424597 kernel: hv_vmbus: registering driver hv_netvsc Mar 13 12:21:01.426977 kernel: PTP clock support registered Mar 13 12:21:01.430007 kernel: hv_vmbus: registering driver hid_hyperv Mar 13 12:21:01.430033 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Mar 13 12:21:01.436819 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 12:21:01.463010 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 13 12:21:01.463178 kernel: hv_vmbus: registering driver hv_storvsc Mar 13 12:21:01.459657 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 12:21:01.475531 kernel: scsi host1: storvsc_host_t Mar 13 12:21:01.475710 kernel: scsi host0: storvsc_host_t Mar 13 12:21:01.459838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:21:01.495657 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 13 12:21:01.485224 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 12:21:01.506671 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 13 12:21:01.507981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:01.528933 kernel: hv_utils: Registering HyperV Utility Driver Mar 13 12:21:01.528975 kernel: hv_vmbus: registering driver hv_utils Mar 13 12:21:01.533669 kernel: hv_utils: Heartbeat IC version 3.0 Mar 13 12:21:01.546390 kernel: hv_utils: Shutdown IC version 3.2 Mar 13 12:21:01.546424 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: VF slot 1 added Mar 13 12:21:01.546588 kernel: hv_utils: TimeSync IC version 4.0 Mar 13 12:21:01.926805 systemd-resolved[252]: Clock change detected. Flushing caches. Mar 13 12:21:01.936649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:21:01.967378 kernel: hv_vmbus: registering driver hv_pci Mar 13 12:21:01.967425 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 13 12:21:01.967644 kernel: hv_pci 009ba370-29af-4e97-b6b9-8d45e7f4dbc7: PCI VMBus probing: Using version 0x10004 Mar 13 12:21:01.967761 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 12:21:01.972441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 12:21:01.998642 kernel: hv_pci 009ba370-29af-4e97-b6b9-8d45e7f4dbc7: PCI host bridge to bus 29af:00 Mar 13 12:21:01.998794 kernel: pci_bus 29af:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 13 12:21:01.998896 kernel: pci_bus 29af:00: No busn resource found for root bus, will use [bus 00-ff] Mar 13 12:21:02.000440 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 13 12:21:02.010058 kernel: pci 29af:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 13 12:21:02.017489 kernel: pci 29af:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 13 12:21:02.031800 kernel: pci 29af:00:02.0: enabling Extended Tags Mar 13 12:21:02.025709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 13 12:21:02.063795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 12:21:02.063969 kernel: pci 29af:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 29af:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 13 12:21:02.064094 kernel: pci_bus 29af:00: busn_res: [bus 00-ff] end is updated to 00 Mar 13 12:21:02.064189 kernel: pci 29af:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 13 12:21:02.074446 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 13 12:21:02.074684 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 13 12:21:02.078476 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 13 12:21:02.085629 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 13 12:21:02.085824 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 13 12:21:02.093477 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:02.099445 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 13 12:21:02.116445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 12:21:02.147659 kernel: mlx5_core 29af:00:02.0: enabling device (0000 -> 0002) Mar 13 12:21:02.153467 kernel: mlx5_core 29af:00:02.0: firmware version: 16.30.5026 Mar 13 12:21:02.350043 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: VF registering: eth1 Mar 13 12:21:02.350256 kernel: mlx5_core 29af:00:02.0 eth1: joined to eth0 Mar 13 12:21:02.356457 kernel: mlx5_core 29af:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 13 12:21:02.365463 kernel: mlx5_core 29af:00:02.0 enP10671s1: renamed from eth1 Mar 13 12:21:02.689940 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 13 12:21:02.712478 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (500) Mar 13 12:21:02.721040 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 13 12:21:02.735313 kernel: BTRFS: device fsid beae115b-a7a4-4bd0-8b91-fe8e188f678a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (505) Mar 13 12:21:02.741228 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 13 12:21:02.760810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 13 12:21:02.766447 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 13 12:21:02.793625 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 12:21:02.815460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:02.824453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:02.833451 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:03.834447 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:21:03.835346 disk-uuid[605]: The operation has completed successfully. Mar 13 12:21:03.903372 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 12:21:03.903497 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 12:21:03.934620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 12:21:03.946214 sh[719]: Success Mar 13 12:21:03.977495 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 13 12:21:04.221332 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 12:21:04.240254 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 12:21:04.247818 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 13 12:21:04.273354 kernel: BTRFS info (device dm-0): first mount of filesystem beae115b-a7a4-4bd0-8b91-fe8e188f678a Mar 13 12:21:04.273396 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:04.279449 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 13 12:21:04.279488 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 13 12:21:04.286442 kernel: BTRFS info (device dm-0): using free space tree Mar 13 12:21:04.601627 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 12:21:04.606234 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 12:21:04.623622 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 12:21:04.634596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 13 12:21:04.664992 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:04.665038 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:04.668983 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:21:04.713090 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:21:04.719980 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 13 12:21:04.730444 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:04.735902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 12:21:04.742831 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 12:21:04.763696 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 13 12:21:04.774833 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 13 12:21:04.806545 systemd-networkd[903]: lo: Link UP Mar 13 12:21:04.806552 systemd-networkd[903]: lo: Gained carrier Mar 13 12:21:04.808363 systemd-networkd[903]: Enumeration completed Mar 13 12:21:04.808472 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 12:21:04.818001 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:21:04.818004 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 12:21:04.818326 systemd[1]: Reached target network.target - Network. Mar 13 12:21:04.893445 kernel: mlx5_core 29af:00:02.0 enP10671s1: Link up Mar 13 12:21:04.931931 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: Data path switched to VF: enP10671s1 Mar 13 12:21:04.931453 systemd-networkd[903]: enP10671s1: Link UP Mar 13 12:21:04.931537 systemd-networkd[903]: eth0: Link UP Mar 13 12:21:04.931627 systemd-networkd[903]: eth0: Gained carrier Mar 13 12:21:04.931635 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:21:04.951632 systemd-networkd[903]: enP10671s1: Gained carrier Mar 13 12:21:04.963476 systemd-networkd[903]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 12:21:05.738880 ignition[902]: Ignition 2.19.0 Mar 13 12:21:05.738893 ignition[902]: Stage: fetch-offline Mar 13 12:21:05.742859 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 13 12:21:05.738942 ignition[902]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.738950 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.739033 ignition[902]: parsed url from cmdline: "" Mar 13 12:21:05.739036 ignition[902]: no config URL provided Mar 13 12:21:05.739041 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 12:21:05.764676 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 13 12:21:05.739047 ignition[902]: no config at "/usr/lib/ignition/user.ign" Mar 13 12:21:05.739051 ignition[902]: failed to fetch config: resource requires networking Mar 13 12:21:05.739509 ignition[902]: Ignition finished successfully Mar 13 12:21:05.786957 ignition[912]: Ignition 2.19.0 Mar 13 12:21:05.786963 ignition[912]: Stage: fetch Mar 13 12:21:05.787113 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.787121 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.787214 ignition[912]: parsed url from cmdline: "" Mar 13 12:21:05.787217 ignition[912]: no config URL provided Mar 13 12:21:05.787221 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 12:21:05.787228 ignition[912]: no config at "/usr/lib/ignition/user.ign" Mar 13 12:21:05.787247 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 13 12:21:05.881879 ignition[912]: GET result: OK Mar 13 12:21:05.881975 ignition[912]: config has been read from IMDS userdata Mar 13 12:21:05.882018 ignition[912]: parsing config with SHA512: 910f6f40d938c348423dfc81f1c9322df947d716e94fc7277670fe76f3f0e314bec67e7c80194627e572c990f35cf0508348a359980edba2777629671b4f6d19 Mar 13 12:21:05.886237 unknown[912]: fetched base config from "system" Mar 13 12:21:05.886653 ignition[912]: fetch: fetch complete Mar 13 12:21:05.886253 unknown[912]: fetched base config from "system" Mar 13 12:21:05.886658 ignition[912]: fetch: fetch passed
Mar 13 12:21:05.886259 unknown[912]: fetched user config from "azure" Mar 13 12:21:05.886703 ignition[912]: Ignition finished successfully Mar 13 12:21:05.891025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 13 12:21:05.909623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 12:21:05.930701 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 12:21:05.925508 ignition[919]: Ignition 2.19.0 Mar 13 12:21:05.925514 ignition[919]: Stage: kargs Mar 13 12:21:05.925697 ignition[919]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.925706 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.926688 ignition[919]: kargs: kargs passed Mar 13 12:21:05.950570 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 12:21:05.926732 ignition[919]: Ignition finished successfully Mar 13 12:21:05.973144 ignition[925]: Ignition 2.19.0 Mar 13 12:21:05.973161 ignition[925]: Stage: disks Mar 13 12:21:05.975642 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 12:21:05.973332 ignition[925]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:05.981126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 12:21:05.973342 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:05.988806 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 12:21:05.974208 ignition[925]: disks: disks passed Mar 13 12:21:05.998543 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 12:21:05.974249 ignition[925]: Ignition finished successfully Mar 13 12:21:06.006665 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 12:21:06.016422 systemd[1]: Reached target basic.target - Basic System.
Mar 13 12:21:06.039654 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 13 12:21:06.120546 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 13 12:21:06.128703 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 12:21:06.142621 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 12:21:06.196459 kernel: EXT4-fs (sda9): mounted filesystem e3689c4f-fa92-4cc2-b7ea-ac589877c8df r/w with ordered data mode. Quota mode: none. Mar 13 12:21:06.196629 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 12:21:06.201190 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 12:21:06.248503 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 12:21:06.268452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944) Mar 13 12:21:06.279485 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:06.279523 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:06.282997 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:21:06.284538 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 12:21:06.290599 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 12:21:06.304673 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 12:21:06.304716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 12:21:06.320971 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 13 12:21:06.338446 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:21:06.341564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 13 12:21:06.347738 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 12:21:06.817563 systemd-networkd[903]: eth0: Gained IPv6LL Mar 13 12:21:06.910550 coreos-metadata[959]: Mar 13 12:21:06.910 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 12:21:06.919746 coreos-metadata[959]: Mar 13 12:21:06.919 INFO Fetch successful Mar 13 12:21:06.919746 coreos-metadata[959]: Mar 13 12:21:06.919 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 13 12:21:06.934849 coreos-metadata[959]: Mar 13 12:21:06.934 INFO Fetch successful Mar 13 12:21:06.951337 coreos-metadata[959]: Mar 13 12:21:06.951 INFO wrote hostname ci-4081.3.101-461ebd96c0 to /sysroot/etc/hostname Mar 13 12:21:06.958922 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 12:21:07.113825 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 12:21:07.158046 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Mar 13 12:21:07.163788 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 12:21:07.171031 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 12:21:08.214305 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 12:21:08.229653 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 12:21:08.236105 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 12:21:08.263189 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 12:21:08.267579 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:08.289122 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 13 12:21:08.298304 ignition[1064]: INFO : Ignition 2.19.0 Mar 13 12:21:08.298304 ignition[1064]: INFO : Stage: mount Mar 13 12:21:08.298304 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:08.298304 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:08.298304 ignition[1064]: INFO : mount: mount passed Mar 13 12:21:08.298304 ignition[1064]: INFO : Ignition finished successfully Mar 13 12:21:08.298717 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 12:21:08.325564 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 12:21:08.339640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 12:21:08.365445 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075) Mar 13 12:21:08.376829 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:21:08.376883 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:21:08.380661 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:21:08.387439 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:21:08.389115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 12:21:08.411700 ignition[1093]: INFO : Ignition 2.19.0 Mar 13 12:21:08.411700 ignition[1093]: INFO : Stage: files Mar 13 12:21:08.418602 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:21:08.418602 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:21:08.418602 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Mar 13 12:21:08.443408 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 12:21:08.443408 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 12:21:08.540349 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 12:21:08.546985 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 12:21:08.546985 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 12:21:08.540716 unknown[1093]: wrote ssh authorized keys file for user: core Mar 13 12:21:08.564558 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 12:21:08.564558 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 13 12:21:08.674159 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 12:21:09.013895 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 12:21:09.013895 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.031366 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Mar 13 12:21:09.491680 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 12:21:09.739757 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 13 12:21:09.739757 ignition[1093]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 12:21:09.773362 ignition[1093]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 12:21:09.782748 ignition[1093]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 12:21:09.782748 ignition[1093]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 12:21:09.782748 ignition[1093]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 12:21:09.810384 ignition[1093]: INFO : files: files passed Mar 13 12:21:09.810384 ignition[1093]: INFO : Ignition finished successfully Mar 13 12:21:09.806031 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 12:21:09.825643 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 12:21:09.840597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 12:21:09.852242 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 12:21:09.852330 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 12:21:09.889568 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:21:09.889568 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:21:09.905034 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:21:09.905505 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 12:21:09.917916 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 12:21:09.938639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 12:21:09.963184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 12:21:09.964512 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 12:21:09.973411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 12:21:09.983179 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 12:21:09.992047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 12:21:10.004631 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 12:21:10.022570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 12:21:10.035730 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 12:21:10.054224 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 12:21:10.056492 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 13 12:21:10.065310 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 12:21:10.075335 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 12:21:10.085809 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 12:21:10.095256 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 12:21:10.095334 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 12:21:10.108484 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 12:21:10.118327 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 12:21:10.127260 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 12:21:10.136345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 12:21:10.146868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 12:21:10.156776 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 12:21:10.166365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 12:21:10.176782 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 12:21:10.187187 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 12:21:10.195958 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 12:21:10.203635 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 12:21:10.203700 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 12:21:10.215754 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:21:10.225381 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:21:10.235405 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 12:21:10.235447 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:21:10.246091 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 12:21:10.246150 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 12:21:10.261686 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 12:21:10.261732 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 12:21:10.271388 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 12:21:10.271428 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 12:21:10.280398 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 13 12:21:10.280438 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 13 12:21:10.311644 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 12:21:10.323419 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 12:21:10.323503 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:21:10.341474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 12:21:10.350280 ignition[1145]: INFO : Ignition 2.19.0
Mar 13 12:21:10.350280 ignition[1145]: INFO : Stage: umount
Mar 13 12:21:10.350280 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 12:21:10.350280 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:21:10.350280 ignition[1145]: INFO : umount: umount passed
Mar 13 12:21:10.350280 ignition[1145]: INFO : Ignition finished successfully
Mar 13 12:21:10.355571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 12:21:10.355644 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 12:21:10.363091 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 12:21:10.363145 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 12:21:10.371830 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 12:21:10.371927 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 12:21:10.380963 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 12:21:10.381053 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 12:21:10.390686 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 12:21:10.390730 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 12:21:10.400924 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 12:21:10.400963 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 12:21:10.410001 systemd[1]: Stopped target network.target - Network.
Mar 13 12:21:10.418274 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 12:21:10.418317 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 12:21:10.429498 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 12:21:10.439014 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 12:21:10.442452 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:21:10.451138 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 12:21:10.459399 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 12:21:10.463708 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 12:21:10.463769 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 12:21:10.472268 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 12:21:10.472312 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 12:21:10.480988 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 12:21:10.481035 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 12:21:10.490496 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 12:21:10.490532 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 12:21:10.678869 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: Data path switched from VF: enP10671s1
Mar 13 12:21:10.500775 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 12:21:10.510989 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 12:21:10.514785 systemd-networkd[903]: eth0: DHCPv6 lease lost
Mar 13 12:21:10.525033 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 12:21:10.525574 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 12:21:10.525666 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 12:21:10.533934 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 12:21:10.534001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:21:10.554691 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 12:21:10.564663 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 12:21:10.564736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 12:21:10.571746 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 12:21:10.586236 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 12:21:10.586848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 12:21:10.606474 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 12:21:10.606570 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:21:10.615548 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 12:21:10.615615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:21:10.624111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 12:21:10.624164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 12:21:10.634507 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 12:21:10.634652 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 12:21:10.644310 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 12:21:10.644381 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:21:10.659912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 12:21:10.659950 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:21:10.665221 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 12:21:10.665271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 12:21:10.678935 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 12:21:10.678984 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 12:21:10.688502 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 12:21:10.688553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:21:10.720701 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 12:21:10.734488 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 12:21:10.734550 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:21:10.746370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:21:10.746415 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:10.757817 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 12:21:10.757924 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 12:21:10.930115 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 13 12:21:10.768020 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 12:21:10.768112 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 12:21:10.778361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 12:21:10.778457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 12:21:10.789077 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 12:21:10.797856 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 12:21:10.797937 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 12:21:10.826722 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 12:21:10.841196 systemd[1]: Switching root.
Mar 13 12:21:10.968580 systemd-journald[217]: Journal stopped
Mar 13 12:21:29.841622 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 12:21:29.841646 kernel: SELinux: policy capability open_perms=1
Mar 13 12:21:29.841656 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 12:21:29.841664 kernel: SELinux: policy capability always_check_network=0
Mar 13 12:21:29.841674 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 12:21:29.841682 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 12:21:29.841690 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 12:21:29.841698 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 12:21:29.841708 systemd[1]: Successfully loaded SELinux policy in 671.203ms.
Mar 13 12:21:29.841718 kernel: audit: type=1403 audit(1773404474.329:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 12:21:29.841728 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.426ms.
Mar 13 12:21:29.841738 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 13 12:21:29.841746 systemd[1]: Detected virtualization microsoft.
Mar 13 12:21:29.841755 systemd[1]: Detected architecture arm64.
Mar 13 12:21:29.841765 systemd[1]: Detected first boot.
Mar 13 12:21:29.841775 systemd[1]: Hostname set to .
Mar 13 12:21:29.841784 systemd[1]: Initializing machine ID from random generator.
Mar 13 12:21:29.841793 zram_generator::config[1185]: No configuration found.
Mar 13 12:21:29.841803 systemd[1]: Populated /etc with preset unit settings.
Mar 13 12:21:29.841812 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 12:21:29.841821 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 12:21:29.841829 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 12:21:29.841840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 12:21:29.841850 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 12:21:29.841859 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 12:21:29.841868 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 12:21:29.841877 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 12:21:29.841887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 12:21:29.841896 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 12:21:29.841906 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 12:21:29.841915 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:21:29.841925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:21:29.841934 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 12:21:29.841943 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 12:21:29.841952 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 12:21:29.841962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 12:21:29.841971 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 13 12:21:29.841981 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:21:29.841990 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 12:21:29.842000 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 12:21:29.842011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 12:21:29.842021 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 12:21:29.842030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 12:21:29.842040 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 12:21:29.842049 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 12:21:29.842060 systemd[1]: Reached target swap.target - Swaps.
Mar 13 12:21:29.842069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 12:21:29.842079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 12:21:29.842088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:21:29.842099 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:21:29.842109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:21:29.842119 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 12:21:29.842129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 12:21:29.842138 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 12:21:29.842148 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 12:21:29.842157 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 12:21:29.842167 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 12:21:29.842176 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 12:21:29.842187 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 12:21:29.842197 systemd[1]: Reached target machines.target - Containers.
Mar 13 12:21:29.842206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 12:21:29.842216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 12:21:29.842225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 12:21:29.842234 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 12:21:29.842244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 12:21:29.842253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 12:21:29.842264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 12:21:29.842274 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 12:21:29.842283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 12:21:29.842293 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 12:21:29.842304 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 12:21:29.842313 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 12:21:29.842323 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 12:21:29.842332 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 12:21:29.842343 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 12:21:29.842353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 12:21:29.842362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 12:21:29.842372 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 12:21:29.842381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 12:21:29.842391 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 12:21:29.842400 systemd[1]: Stopped verity-setup.service.
Mar 13 12:21:29.842409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 12:21:29.842419 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 12:21:29.842436 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 12:21:29.842447 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 12:21:29.842456 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 12:21:29.842466 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 12:21:29.842476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:21:29.842485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 12:21:29.842495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 12:21:29.842504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 12:21:29.842515 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 12:21:29.842526 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 12:21:29.842535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 12:21:29.842544 kernel: loop: module loaded
Mar 13 12:21:29.842553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 12:21:29.842563 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 12:21:29.842588 systemd-journald[1259]: Collecting audit messages is disabled.
Mar 13 12:21:29.842609 systemd-journald[1259]: Journal started
Mar 13 12:21:29.842629 systemd-journald[1259]: Runtime Journal (/run/log/journal/d1b1686bfd394ea982f471a6f309d5ab) is 8.0M, max 78.5M, 70.5M free.
Mar 13 12:21:27.623506 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 12:21:28.585585 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 12:21:28.585942 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 12:21:28.586256 systemd[1]: systemd-journald.service: Consumed 2.562s CPU time.
Mar 13 12:21:29.863011 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 12:21:29.866381 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 12:21:29.866955 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 12:21:29.877975 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 12:21:29.887524 kernel: fuse: init (API version 7.39)
Mar 13 12:21:29.888028 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 12:21:29.888217 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 12:21:29.898535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:21:29.904964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 12:21:29.924549 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 12:21:29.932065 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 12:21:29.937566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 12:21:29.937689 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 12:21:29.943944 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 13 12:21:29.954578 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 12:21:29.965628 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 12:21:29.972846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 12:21:29.979470 kernel: ACPI: bus type drm_connector registered
Mar 13 12:21:29.993585 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 12:21:29.999750 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 12:21:30.004964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 12:21:30.007620 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 12:21:30.015284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 12:21:30.016260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 12:21:30.023661 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 12:21:30.031356 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 13 12:21:30.038465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 12:21:30.044629 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 12:21:30.044760 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 12:21:30.050183 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 12:21:30.056134 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 12:21:30.064412 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 12:21:30.080676 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 12:21:30.092451 kernel: loop0: detected capacity change from 0 to 114424
Mar 13 12:21:30.093049 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 12:21:30.099334 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 12:21:30.100773 systemd-journald[1259]: Time spent on flushing to /var/log/journal/d1b1686bfd394ea982f471a6f309d5ab is 21.862ms for 896 entries.
Mar 13 12:21:30.100773 systemd-journald[1259]: System Journal (/var/log/journal/d1b1686bfd394ea982f471a6f309d5ab) is 8.0M, max 2.6G, 2.6G free.
Mar 13 12:21:30.210619 systemd-journald[1259]: Received client request to flush runtime journal.
Mar 13 12:21:30.117668 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 13 12:21:30.124154 udevadm[1319]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 13 12:21:30.189807 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:21:30.211631 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 12:21:30.260195 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 12:21:30.263642 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 13 12:21:30.326311 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 12:21:30.338784 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 12:21:30.416257 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Mar 13 12:21:30.416273 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Mar 13 12:21:30.420455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:21:30.557531 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 12:21:30.617463 kernel: loop1: detected capacity change from 0 to 114328
Mar 13 12:21:30.779063 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 12:21:30.788971 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 12:21:30.812613 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Mar 13 12:21:30.880648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 12:21:30.900659 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 12:21:30.948652 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 12:21:30.960178 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 13 12:21:31.010739 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 12:21:31.040454 kernel: hv_vmbus: registering driver hv_balloon
Mar 13 12:21:31.044495 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 13 12:21:31.050416 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 13 12:21:31.073561 kernel: hv_vmbus: registering driver hyperv_fb
Mar 13 12:21:31.073656 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 13 12:21:31.087524 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 13 12:21:31.087617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 12:21:31.087832 kernel: loop2: detected capacity change from 0 to 31320
Mar 13 12:21:31.094970 kernel: Console: switching to colour dummy device 80x25
Mar 13 12:21:31.102615 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 12:21:31.102680 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 12:21:31.135550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:31.148914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:21:31.149144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:31.162034 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:31.170347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:21:31.170567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:31.184832 systemd-networkd[1354]: lo: Link UP
Mar 13 12:21:31.184842 systemd-networkd[1354]: lo: Gained carrier
Mar 13 12:21:31.185869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:21:31.187720 systemd-networkd[1354]: Enumeration completed
Mar 13 12:21:31.189560 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 12:21:31.189563 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 12:21:31.196049 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 12:21:31.230493 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1349)
Mar 13 12:21:31.232147 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 12:21:31.262462 kernel: mlx5_core 29af:00:02.0 enP10671s1: Link up
Mar 13 12:21:31.274253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 13 12:21:31.287476 kernel: hv_netvsc 000d3a6d-f059-000d-3a6d-f059000d3a6d eth0: Data path switched to VF: enP10671s1
Mar 13 12:21:31.287757 systemd-networkd[1354]: enP10671s1: Link UP
Mar 13 12:21:31.288406 systemd-networkd[1354]: eth0: Link UP
Mar 13 12:21:31.288413 systemd-networkd[1354]: eth0: Gained carrier
Mar 13 12:21:31.288435 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 12:21:31.289683 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 12:21:31.291639 systemd-networkd[1354]: enP10671s1: Gained carrier
Mar 13 12:21:31.303467 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 13 12:21:31.391016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 12:21:31.563521 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 13 12:21:31.574598 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 13 12:21:31.604470 kernel: loop3: detected capacity change from 0 to 200864
Mar 13 12:21:31.656456 kernel: loop4: detected capacity change from 0 to 114424
Mar 13 12:21:31.669444 kernel: loop5: detected capacity change from 0 to 114328
Mar 13 12:21:31.678443 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 13 12:21:31.682453 kernel: loop6: detected capacity change from 0 to 31320
Mar 13 12:21:31.696490 kernel: loop7: detected capacity change from 0 to 200864
Mar 13 12:21:31.701968 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 13 12:21:31.708712 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:21:31.712052 (sd-merge)[1442]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 13 12:21:31.712791 (sd-merge)[1442]: Merged extensions into '/usr'.
Mar 13 12:21:31.718567 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 13 12:21:31.728101 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 13 12:21:31.729632 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 12:21:31.729864 systemd[1]: Reloading...
Mar 13 12:21:31.784460 zram_generator::config[1472]: No configuration found.
Mar 13 12:21:32.198895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 13 12:21:32.276337 systemd[1]: Reloading finished in 545 ms.
Mar 13 12:21:32.304152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:21:32.310983 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 13 12:21:32.317010 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 13 12:21:32.335601 systemd[1]: Starting ensure-sysext.service... Mar 13 12:21:32.340029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 12:21:32.347535 systemd[1]: Reloading requested from client PID 1533 ('systemctl') (unit ensure-sysext.service)... Mar 13 12:21:32.347547 systemd[1]: Reloading... Mar 13 12:21:32.359877 systemd-tmpfiles[1534]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 12:21:32.360615 systemd-tmpfiles[1534]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 12:21:32.361842 systemd-tmpfiles[1534]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 12:21:32.362364 systemd-tmpfiles[1534]: ACLs are not supported, ignoring. Mar 13 12:21:32.362583 systemd-tmpfiles[1534]: ACLs are not supported, ignoring. Mar 13 12:21:32.415457 zram_generator::config[1564]: No configuration found. Mar 13 12:21:32.481982 systemd-networkd[1354]: eth0: Gained IPv6LL Mar 13 12:21:32.520276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:21:32.598275 systemd[1]: Reloading finished in 250 ms. Mar 13 12:21:32.614185 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 12:21:32.620479 systemd-tmpfiles[1534]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 13 12:21:32.620486 systemd-tmpfiles[1534]: Skipping /boot Mar 13 12:21:32.630380 systemd-tmpfiles[1534]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 12:21:32.631527 systemd-tmpfiles[1534]: Skipping /boot Mar 13 12:21:32.636809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 12:21:32.643656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 12:21:32.651689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 12:21:32.661273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 12:21:32.666223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 12:21:32.668467 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 12:21:32.674903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 12:21:32.675041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 12:21:32.680887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 12:21:32.681000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 12:21:32.688038 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 12:21:32.688158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 12:21:32.709662 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 13 12:21:32.957651 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 12:21:32.962805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 12:21:32.963887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Mar 13 12:21:32.974679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 12:21:32.980588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 12:21:32.987483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 12:21:32.989697 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 12:21:33.009722 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 12:21:33.016234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 12:21:33.023096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 12:21:33.023389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 12:21:33.029234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 12:21:33.029380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 12:21:33.036054 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 12:21:33.036188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 12:21:33.049317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 12:21:33.052761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 12:21:33.060684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 12:21:33.069172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 12:21:33.084729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 12:21:33.091863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 13 12:21:33.092155 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 12:21:33.098893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 12:21:33.100467 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 12:21:33.107164 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 12:21:33.107357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 12:21:33.114030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 12:21:33.114320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 12:21:33.120557 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 12:21:33.120778 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 12:21:33.130539 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 12:21:33.136618 systemd[1]: Finished ensure-sysext.service. Mar 13 12:21:33.144783 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 12:21:33.144937 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 12:21:33.917634 systemd-resolved[1641]: Positive Trust Anchors: Mar 13 12:21:33.917650 systemd-resolved[1641]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 12:21:33.917681 systemd-resolved[1641]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 12:21:34.154512 systemd-resolved[1641]: Using system hostname 'ci-4081.3.101-461ebd96c0'. Mar 13 12:21:34.156193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 12:21:34.161856 systemd[1]: Reached target network.target - Network. Mar 13 12:21:34.165941 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 12:21:34.171094 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 12:21:34.358597 augenrules[1665]: No rules Mar 13 12:21:34.360624 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 13 12:21:34.977647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 12:21:39.614516 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 13 12:21:39.620749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 12:21:50.566210 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 12:21:50.702944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 13 12:21:50.713600 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 12:21:50.726791 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 12:21:50.732229 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 12:21:50.737274 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 12:21:50.743026 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 12:21:50.748849 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 12:21:50.753925 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 12:21:50.759784 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 12:21:50.765402 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 12:21:50.765456 systemd[1]: Reached target paths.target - Path Units. Mar 13 12:21:50.769741 systemd[1]: Reached target timers.target - Timer Units. Mar 13 12:21:50.776462 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 12:21:50.782720 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 12:21:50.791184 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 12:21:50.796309 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 12:21:50.801688 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 12:21:50.806300 systemd[1]: Reached target basic.target - Basic System. Mar 13 12:21:50.811057 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 12:21:50.811087 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Mar 13 12:21:51.044514 systemd[1]: Starting chronyd.service - NTP client/server... Mar 13 12:21:51.051567 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 12:21:51.062619 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 13 12:21:51.068034 (chronyd)[1678]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 13 12:21:51.071050 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 12:21:51.076225 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 12:21:51.081769 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 12:21:51.086411 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 12:21:51.086556 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 13 12:21:51.090610 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 13 12:21:51.092242 chronyd[1689]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 13 12:21:51.095759 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 13 12:21:51.098550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:51.103615 jq[1684]: false Mar 13 12:21:51.101388 KVP[1687]: KVP starting; pid is:1687 Mar 13 12:21:51.107583 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 12:21:51.112718 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 12:21:51.119860 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 13 12:21:51.125967 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 12:21:51.136477 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 12:21:51.144606 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 12:21:51.149715 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 12:21:51.150141 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 12:21:51.151669 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 12:21:51.162669 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 12:21:51.169936 chronyd[1689]: Timezone right/UTC failed leap second check, ignoring Mar 13 12:21:51.170094 chronyd[1689]: Loaded seccomp filter (level 2) Mar 13 12:21:51.173009 systemd[1]: Started chronyd.service - NTP client/server. Mar 13 12:21:51.178023 jq[1703]: true Mar 13 12:21:51.187771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 12:21:51.189467 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 12:21:51.197373 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 12:21:51.197569 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 13 12:21:51.225776 kernel: hv_utils: KVP IC version 4.0 Mar 13 12:21:51.223547 KVP[1687]: KVP LIC Version: 3.1 Mar 13 12:21:51.233465 jq[1707]: true Mar 13 12:21:51.422403 extend-filesystems[1685]: Found loop4 Mar 13 12:21:51.422403 extend-filesystems[1685]: Found loop5 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found loop6 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found loop7 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda1 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda2 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda3 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found usr Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda4 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda6 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda7 Mar 13 12:21:51.428848 extend-filesystems[1685]: Found sda9 Mar 13 12:21:51.428848 extend-filesystems[1685]: Checking size of /dev/sda9 Mar 13 12:21:51.473372 (ntainerd)[1735]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 12:21:51.482696 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 12:21:51.483900 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 12:21:51.520459 tar[1706]: linux-arm64/LICENSE Mar 13 12:21:51.520459 tar[1706]: linux-arm64/helm Mar 13 12:21:51.537192 systemd-logind[1698]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 12:21:51.537642 systemd-logind[1698]: New seat seat0. Mar 13 12:21:51.538275 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 13 12:21:51.753484 update_engine[1702]: I20260313 12:21:51.752614 1702 main.cc:92] Flatcar Update Engine starting Mar 13 12:21:51.813871 bash[1730]: Updated "/home/core/.ssh/authorized_keys" Mar 13 12:21:51.817514 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 12:21:51.826238 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 13 12:21:51.835781 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 12:21:51.860608 extend-filesystems[1685]: Old size kept for /dev/sda9 Mar 13 12:21:51.876034 extend-filesystems[1685]: Found sr0 Mar 13 12:21:51.863208 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 12:21:51.863394 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 12:21:51.913496 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1748) Mar 13 12:21:51.995030 dbus-daemon[1681]: [system] SELinux support is enabled Mar 13 12:21:51.997517 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 12:21:52.001875 update_engine[1702]: I20260313 12:21:52.001827 1702 update_check_scheduler.cc:74] Next update check in 8m53s Mar 13 12:21:52.006406 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 12:21:52.007048 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 12:21:52.016760 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 12:21:52.016786 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 13 12:21:52.025308 dbus-daemon[1681]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 13 12:21:52.026645 systemd[1]: Started update-engine.service - Update Engine. Mar 13 12:21:52.045736 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 12:21:52.074152 coreos-metadata[1680]: Mar 13 12:21:52.073 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 12:21:52.078586 coreos-metadata[1680]: Mar 13 12:21:52.078 INFO Fetch successful Mar 13 12:21:52.079058 coreos-metadata[1680]: Mar 13 12:21:52.078 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 13 12:21:52.083608 coreos-metadata[1680]: Mar 13 12:21:52.083 INFO Fetch successful Mar 13 12:21:52.083608 coreos-metadata[1680]: Mar 13 12:21:52.083 INFO Fetching http://168.63.129.16/machine/f362fdcc-0284-469b-8f50-6a2ef38da386/b00526c8%2Db1c8%2D4d86%2D991d%2D51c0b68a101e.%5Fci%2D4081.3.101%2D461ebd96c0?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 13 12:21:52.087020 coreos-metadata[1680]: Mar 13 12:21:52.086 INFO Fetch successful Mar 13 12:21:52.087099 coreos-metadata[1680]: Mar 13 12:21:52.087 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 13 12:21:52.100128 coreos-metadata[1680]: Mar 13 12:21:52.100 INFO Fetch successful Mar 13 12:21:52.139719 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 12:21:52.150227 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 12:21:52.189729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 12:21:52.200204 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:21:52.310325 tar[1706]: linux-arm64/README.md Mar 13 12:21:52.322761 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 12:21:52.322939 locksmithd[1791]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 12:21:52.452106 sshd_keygen[1701]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 12:21:52.476559 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 12:21:52.490686 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 12:21:52.498341 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 13 12:21:52.506137 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 12:21:52.506302 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 12:21:52.517750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 12:21:52.531917 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 13 12:21:52.545770 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 12:21:52.563764 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 12:21:52.570371 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 13 12:21:52.575719 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 12:21:52.614041 containerd[1735]: time="2026-03-13T12:21:52.613958760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 13 12:21:52.644332 containerd[1735]: time="2026-03-13T12:21:52.644295640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 13 12:21:52.646593 containerd[1735]: time="2026-03-13T12:21:52.646561840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.129-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:21:52.646696 containerd[1735]: time="2026-03-13T12:21:52.646682200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 13 12:21:52.646751 containerd[1735]: time="2026-03-13T12:21:52.646740280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 13 12:21:52.646952 containerd[1735]: time="2026-03-13T12:21:52.646934840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 13 12:21:52.647014 containerd[1735]: time="2026-03-13T12:21:52.647002120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647122 containerd[1735]: time="2026-03-13T12:21:52.647104960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647184 containerd[1735]: time="2026-03-13T12:21:52.647170120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647403 containerd[1735]: time="2026-03-13T12:21:52.647383240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647543 containerd[1735]: time="2026-03-13T12:21:52.647501880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647621 containerd[1735]: time="2026-03-13T12:21:52.647605680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647669 containerd[1735]: time="2026-03-13T12:21:52.647657560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 13 12:21:52.647979 containerd[1735]: time="2026-03-13T12:21:52.647953400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:21:52.648252 containerd[1735]: time="2026-03-13T12:21:52.648227160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:21:52.648473 containerd[1735]: time="2026-03-13T12:21:52.648424520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:21:52.648733 containerd[1735]: time="2026-03-13T12:21:52.648532720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 13 12:21:52.648733 containerd[1735]: time="2026-03-13T12:21:52.648625600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 13 12:21:52.648733 containerd[1735]: time="2026-03-13T12:21:52.648666600Z" level=info msg="metadata content store policy set" policy=shared Mar 13 12:21:52.662928 containerd[1735]: time="2026-03-13T12:21:52.662902160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 13 12:21:52.663043 containerd[1735]: time="2026-03-13T12:21:52.663029120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 13 12:21:52.663169 containerd[1735]: time="2026-03-13T12:21:52.663153360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 13 12:21:52.663246 containerd[1735]: time="2026-03-13T12:21:52.663234320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 13 12:21:52.663377 containerd[1735]: time="2026-03-13T12:21:52.663309640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 13 12:21:52.663543 containerd[1735]: time="2026-03-13T12:21:52.663527360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 13 12:21:52.663878 containerd[1735]: time="2026-03-13T12:21:52.663862040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 13 12:21:52.664069 containerd[1735]: time="2026-03-13T12:21:52.664053960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664165800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664186120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664206760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664227600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664242160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664256560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664270960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664283680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664295680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664308760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664331560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664345800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664357800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664447 containerd[1735]: time="2026-03-13T12:21:52.664370000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664708 containerd[1735]: time="2026-03-13T12:21:52.664382000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664708 containerd[1735]: time="2026-03-13T12:21:52.664395080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664708 containerd[1735]: time="2026-03-13T12:21:52.664407800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664708 containerd[1735]: time="2026-03-13T12:21:52.664421480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664861 containerd[1735]: time="2026-03-13T12:21:52.664797920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664861 containerd[1735]: time="2026-03-13T12:21:52.664824760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.664861 containerd[1735]: time="2026-03-13T12:21:52.664838200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.665008 containerd[1735]: time="2026-03-13T12:21:52.664850360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.665008 containerd[1735]: time="2026-03-13T12:21:52.664951360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 13 12:21:52.665008 containerd[1735]: time="2026-03-13T12:21:52.664974160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 13 12:21:52.665089 containerd[1735]: time="2026-03-13T12:21:52.664996040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.665237 containerd[1735]: time="2026-03-13T12:21:52.665130920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.665237 containerd[1735]: time="2026-03-13T12:21:52.665147680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 13 12:21:52.665237 containerd[1735]: time="2026-03-13T12:21:52.665208560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 13 12:21:52.665402 containerd[1735]: time="2026-03-13T12:21:52.665226800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 13 12:21:52.665402 containerd[1735]: time="2026-03-13T12:21:52.665358040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 13 12:21:52.665402 containerd[1735]: time="2026-03-13T12:21:52.665373360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 13 12:21:52.665402 containerd[1735]: time="2026-03-13T12:21:52.665382640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.665662 containerd[1735]: time="2026-03-13T12:21:52.665565160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 13 12:21:52.665662 containerd[1735]: time="2026-03-13T12:21:52.665593640Z" level=info msg="NRI interface is disabled by configuration." Mar 13 12:21:52.665662 containerd[1735]: time="2026-03-13T12:21:52.665608040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 13 12:21:52.666172 containerd[1735]: time="2026-03-13T12:21:52.666047080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 13 12:21:52.666172 containerd[1735]: time="2026-03-13T12:21:52.666115120Z" level=info msg="Connect containerd service" Mar 13 12:21:52.666438 containerd[1735]: time="2026-03-13T12:21:52.666149400Z" level=info msg="using legacy CRI server" Mar 13 12:21:52.666438 containerd[1735]: time="2026-03-13T12:21:52.666327080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 12:21:52.666676 containerd[1735]: time="2026-03-13T12:21:52.666553200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 13 12:21:52.667261 containerd[1735]: time="2026-03-13T12:21:52.667234240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 12:21:52.667525 containerd[1735]: time="2026-03-13T12:21:52.667472160Z" level=info msg="Start subscribing containerd event" Mar 13 
12:21:52.667726 containerd[1735]: time="2026-03-13T12:21:52.667652760Z" level=info msg="Start recovering state" Mar 13 12:21:52.667972 containerd[1735]: time="2026-03-13T12:21:52.667713640Z" level=info msg="Start event monitor" Mar 13 12:21:52.667972 containerd[1735]: time="2026-03-13T12:21:52.667909720Z" level=info msg="Start snapshots syncer" Mar 13 12:21:52.667972 containerd[1735]: time="2026-03-13T12:21:52.667924120Z" level=info msg="Start cni network conf syncer for default" Mar 13 12:21:52.667972 containerd[1735]: time="2026-03-13T12:21:52.667932000Z" level=info msg="Start streaming server" Mar 13 12:21:52.668073 containerd[1735]: time="2026-03-13T12:21:52.667876520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 12:21:52.669443 containerd[1735]: time="2026-03-13T12:21:52.668101520Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 12:21:52.668243 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 12:21:52.673090 containerd[1735]: time="2026-03-13T12:21:52.673060800Z" level=info msg="containerd successfully booted in 0.061665s" Mar 13 12:21:52.675186 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 12:21:52.681175 systemd[1]: Startup finished in 619ms (kernel) + 13.548s (initrd) + 39.022s (userspace) = 53.190s. Mar 13 12:21:52.710040 kubelet[1806]: E0313 12:21:52.709988 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:21:52.712792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:21:52.712930 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 13 12:21:52.915975 login[1840]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 13 12:21:52.917904 login[1841]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:52.927257 systemd-logind[1698]: New session 1 of user core. Mar 13 12:21:52.927791 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 12:21:52.935850 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 12:21:53.004103 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 12:21:53.010740 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 12:21:53.013588 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 12:21:53.173121 systemd[1854]: Queued start job for default target default.target. Mar 13 12:21:53.179610 systemd[1854]: Created slice app.slice - User Application Slice. Mar 13 12:21:53.179637 systemd[1854]: Reached target paths.target - Paths. Mar 13 12:21:53.179649 systemd[1854]: Reached target timers.target - Timers. Mar 13 12:21:53.180810 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 12:21:53.191192 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 12:21:53.191245 systemd[1854]: Reached target sockets.target - Sockets. Mar 13 12:21:53.191256 systemd[1854]: Reached target basic.target - Basic System. Mar 13 12:21:53.191305 systemd[1854]: Reached target default.target - Main User Target. Mar 13 12:21:53.191334 systemd[1854]: Startup finished in 172ms. Mar 13 12:21:53.191378 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 12:21:53.199332 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 13 12:21:53.916419 login[1840]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:53.920669 systemd-logind[1698]: New session 2 of user core. Mar 13 12:21:53.930554 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 12:21:54.504245 waagent[1837]: 2026-03-13T12:21:54.504165Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 13 12:21:54.508869 waagent[1837]: 2026-03-13T12:21:54.508821Z INFO Daemon Daemon OS: flatcar 4081.3.101 Mar 13 12:21:54.512648 waagent[1837]: 2026-03-13T12:21:54.512609Z INFO Daemon Daemon Python: 3.11.9 Mar 13 12:21:54.516056 waagent[1837]: 2026-03-13T12:21:54.516011Z INFO Daemon Daemon Run daemon Mar 13 12:21:54.521448 waagent[1837]: 2026-03-13T12:21:54.520845Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.101' Mar 13 12:21:54.528302 waagent[1837]: 2026-03-13T12:21:54.528258Z INFO Daemon Daemon Using waagent for provisioning Mar 13 12:21:54.532645 waagent[1837]: 2026-03-13T12:21:54.532559Z INFO Daemon Daemon Activate resource disk Mar 13 12:21:54.536400 waagent[1837]: 2026-03-13T12:21:54.536360Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 13 12:21:54.545933 waagent[1837]: 2026-03-13T12:21:54.545884Z INFO Daemon Daemon Found device: None Mar 13 12:21:54.549635 waagent[1837]: 2026-03-13T12:21:54.549594Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 13 12:21:54.556469 waagent[1837]: 2026-03-13T12:21:54.556419Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 13 12:21:54.567434 waagent[1837]: 2026-03-13T12:21:54.567384Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 12:21:54.572247 waagent[1837]: 2026-03-13T12:21:54.572209Z INFO Daemon Daemon Running default provisioning handler Mar 13 
12:21:54.582693 waagent[1837]: 2026-03-13T12:21:54.582629Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 13 12:21:54.594277 waagent[1837]: 2026-03-13T12:21:54.594227Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 13 12:21:54.601951 waagent[1837]: 2026-03-13T12:21:54.601911Z INFO Daemon Daemon cloud-init is enabled: False Mar 13 12:21:54.606193 waagent[1837]: 2026-03-13T12:21:54.606157Z INFO Daemon Daemon Copying ovf-env.xml Mar 13 12:21:54.711697 waagent[1837]: 2026-03-13T12:21:54.711609Z INFO Daemon Daemon Successfully mounted dvd Mar 13 12:21:54.740499 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 13 12:21:54.742823 waagent[1837]: 2026-03-13T12:21:54.742770Z INFO Daemon Daemon Detect protocol endpoint Mar 13 12:21:54.746881 waagent[1837]: 2026-03-13T12:21:54.746835Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 12:21:54.751580 waagent[1837]: 2026-03-13T12:21:54.751539Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 13 12:21:54.756852 waagent[1837]: 2026-03-13T12:21:54.756777Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 13 12:21:54.761210 waagent[1837]: 2026-03-13T12:21:54.761170Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 13 12:21:54.765601 waagent[1837]: 2026-03-13T12:21:54.765562Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 13 12:21:54.797983 waagent[1837]: 2026-03-13T12:21:54.797946Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 13 12:21:54.808054 waagent[1837]: 2026-03-13T12:21:54.803765Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 13 12:21:54.808319 waagent[1837]: 2026-03-13T12:21:54.808281Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 13 12:21:55.157468 waagent[1837]: 2026-03-13T12:21:55.156796Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 13 12:21:55.162221 waagent[1837]: 2026-03-13T12:21:55.162172Z INFO Daemon Daemon Forcing an update of the goal state. Mar 13 12:21:55.170304 waagent[1837]: 2026-03-13T12:21:55.170258Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 12:21:55.203263 waagent[1837]: 2026-03-13T12:21:55.203221Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 13 12:21:55.208115 waagent[1837]: 2026-03-13T12:21:55.208074Z INFO Daemon Mar 13 12:21:55.210359 waagent[1837]: 2026-03-13T12:21:55.210318Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7319bfc5-e954-4f07-8f59-711e28ec168b eTag: 16554284183879765106 source: Fabric] Mar 13 12:21:55.219338 waagent[1837]: 2026-03-13T12:21:55.219297Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 13 12:21:55.225301 waagent[1837]: 2026-03-13T12:21:55.225259Z INFO Daemon Mar 13 12:21:55.227596 waagent[1837]: 2026-03-13T12:21:55.227555Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 13 12:21:55.237121 waagent[1837]: 2026-03-13T12:21:55.237090Z INFO Daemon Daemon Downloading artifacts profile blob Mar 13 12:21:55.310460 waagent[1837]: 2026-03-13T12:21:55.309995Z INFO Daemon Downloaded certificate {'thumbprint': '2EC21A625CED5F512FF26FDC0E9847E15764A3EE', 'hasPrivateKey': True} Mar 13 12:21:55.318810 waagent[1837]: 2026-03-13T12:21:55.318768Z INFO Daemon Fetch goal state completed Mar 13 12:21:55.329498 waagent[1837]: 2026-03-13T12:21:55.329455Z INFO Daemon Daemon Starting provisioning Mar 13 12:21:55.333814 waagent[1837]: 2026-03-13T12:21:55.333773Z INFO Daemon Daemon Handle ovf-env.xml. Mar 13 12:21:55.337599 waagent[1837]: 2026-03-13T12:21:55.337562Z INFO Daemon Daemon Set hostname [ci-4081.3.101-461ebd96c0] Mar 13 12:21:55.362450 waagent[1837]: 2026-03-13T12:21:55.360154Z INFO Daemon Daemon Publish hostname [ci-4081.3.101-461ebd96c0] Mar 13 12:21:55.365469 waagent[1837]: 2026-03-13T12:21:55.365416Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 13 12:21:55.370410 waagent[1837]: 2026-03-13T12:21:55.370372Z INFO Daemon Daemon Primary interface is [eth0] Mar 13 12:21:55.416142 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:21:55.416149 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 13 12:21:55.416177 systemd-networkd[1354]: eth0: DHCP lease lost Mar 13 12:21:55.417943 waagent[1837]: 2026-03-13T12:21:55.417865Z INFO Daemon Daemon Create user account if not exists Mar 13 12:21:55.422544 waagent[1837]: 2026-03-13T12:21:55.422420Z INFO Daemon Daemon User core already exists, skip useradd Mar 13 12:21:55.423531 systemd-networkd[1354]: eth0: DHCPv6 lease lost Mar 13 12:21:55.427464 waagent[1837]: 2026-03-13T12:21:55.427407Z INFO Daemon Daemon Configure sudoer Mar 13 12:21:55.431501 waagent[1837]: 2026-03-13T12:21:55.431410Z INFO Daemon Daemon Configure sshd Mar 13 12:21:55.435113 waagent[1837]: 2026-03-13T12:21:55.435068Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 13 12:21:55.445690 waagent[1837]: 2026-03-13T12:21:55.445221Z INFO Daemon Daemon Deploy ssh public key. Mar 13 12:21:55.461496 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.20.18/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 12:21:56.569836 waagent[1837]: 2026-03-13T12:21:56.569772Z INFO Daemon Daemon Provisioning complete Mar 13 12:21:56.586366 waagent[1837]: 2026-03-13T12:21:56.586326Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 13 12:21:56.591437 waagent[1837]: 2026-03-13T12:21:56.591395Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 13 12:21:56.599422 waagent[1837]: 2026-03-13T12:21:56.599383Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 13 12:21:56.725407 waagent[1903]: 2026-03-13T12:21:56.725332Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 13 12:21:56.726347 waagent[1903]: 2026-03-13T12:21:56.725853Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.101 Mar 13 12:21:56.726347 waagent[1903]: 2026-03-13T12:21:56.725926Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 13 12:21:59.354302 waagent[1903]: 2026-03-13T12:21:59.354143Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.101; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 13 12:21:59.702729 waagent[1903]: 2026-03-13T12:21:59.702653Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 12:21:59.702824 waagent[1903]: 2026-03-13T12:21:59.702793Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 12:21:59.710887 waagent[1903]: 2026-03-13T12:21:59.710830Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 12:21:59.715943 waagent[1903]: 2026-03-13T12:21:59.715903Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 13 12:21:59.716370 waagent[1903]: 2026-03-13T12:21:59.716329Z INFO ExtHandler Mar 13 12:21:59.716459 waagent[1903]: 2026-03-13T12:21:59.716413Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 001b7168-0849-46d7-bae8-494a76d58f0f eTag: 16554284183879765106 source: Fabric] Mar 13 12:21:59.716760 waagent[1903]: 2026-03-13T12:21:59.716724Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 13 12:21:59.717518 waagent[1903]: 2026-03-13T12:21:59.717472Z INFO ExtHandler Mar 13 12:21:59.717591 waagent[1903]: 2026-03-13T12:21:59.717564Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 13 12:21:59.721206 waagent[1903]: 2026-03-13T12:21:59.721177Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 13 12:22:00.058509 waagent[1903]: 2026-03-13T12:22:00.057746Z INFO ExtHandler Downloaded certificate {'thumbprint': '2EC21A625CED5F512FF26FDC0E9847E15764A3EE', 'hasPrivateKey': True} Mar 13 12:22:00.058509 waagent[1903]: 2026-03-13T12:22:00.058352Z INFO ExtHandler Fetch goal state completed Mar 13 12:22:00.074215 waagent[1903]: 2026-03-13T12:22:00.074166Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1903 Mar 13 12:22:00.074358 waagent[1903]: 2026-03-13T12:22:00.074324Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 13 12:22:00.075962 waagent[1903]: 2026-03-13T12:22:00.075920Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.101', '', 'Flatcar Container Linux by Kinvolk'] Mar 13 12:22:00.076319 waagent[1903]: 2026-03-13T12:22:00.076283Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 13 12:22:00.368652 waagent[1903]: 2026-03-13T12:22:00.368615Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 13 12:22:00.368967 waagent[1903]: 2026-03-13T12:22:00.368812Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 13 12:22:00.375324 waagent[1903]: 2026-03-13T12:22:00.375270Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 13 12:22:00.380898 systemd[1]: Reloading requested from client PID 1916 ('systemctl') (unit waagent.service)... Mar 13 12:22:00.381146 systemd[1]: Reloading... 
Mar 13 12:22:00.452455 zram_generator::config[1947]: No configuration found. Mar 13 12:22:00.555269 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:22:00.634766 systemd[1]: Reloading finished in 253 ms. Mar 13 12:22:00.656470 waagent[1903]: 2026-03-13T12:22:00.656359Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 13 12:22:00.662867 systemd[1]: Reloading requested from client PID 2004 ('systemctl') (unit waagent.service)... Mar 13 12:22:00.662880 systemd[1]: Reloading... Mar 13 12:22:00.745524 zram_generator::config[2038]: No configuration found. Mar 13 12:22:00.831596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:22:00.909201 systemd[1]: Reloading finished in 246 ms. Mar 13 12:22:00.931917 waagent[1903]: 2026-03-13T12:22:00.931673Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 13 12:22:00.931917 waagent[1903]: 2026-03-13T12:22:00.931829Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 13 12:22:02.834421 waagent[1903]: 2026-03-13T12:22:02.834339Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 13 12:22:02.835016 waagent[1903]: 2026-03-13T12:22:02.834964Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 13 12:22:02.835785 waagent[1903]: 2026-03-13T12:22:02.835708Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 13 12:22:02.836264 waagent[1903]: 2026-03-13T12:22:02.836127Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 13 12:22:02.837195 waagent[1903]: 2026-03-13T12:22:02.836477Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 12:22:02.837195 waagent[1903]: 2026-03-13T12:22:02.836571Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 12:22:02.837195 waagent[1903]: 2026-03-13T12:22:02.836766Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 13 12:22:02.837195 waagent[1903]: 2026-03-13T12:22:02.836936Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 13 12:22:02.837195 waagent[1903]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 13 12:22:02.837195 waagent[1903]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 13 12:22:02.837195 waagent[1903]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 13 12:22:02.837195 waagent[1903]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 13 12:22:02.837195 waagent[1903]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 12:22:02.837195 waagent[1903]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 12:22:02.837566 waagent[1903]: 2026-03-13T12:22:02.837508Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 12:22:02.837746 waagent[1903]: 2026-03-13T12:22:02.837702Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 13 12:22:02.837793 waagent[1903]: 2026-03-13T12:22:02.837754Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 13 12:22:02.838151 waagent[1903]: 2026-03-13T12:22:02.838097Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 13 12:22:02.838285 waagent[1903]: 2026-03-13T12:22:02.838242Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Mar 13 12:22:02.838516 waagent[1903]: 2026-03-13T12:22:02.838469Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 13 12:22:02.838907 waagent[1903]: 2026-03-13T12:22:02.838866Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 12:22:02.840299 waagent[1903]: 2026-03-13T12:22:02.840250Z INFO EnvHandler ExtHandler Configure routes Mar 13 12:22:02.840371 waagent[1903]: 2026-03-13T12:22:02.840341Z INFO EnvHandler ExtHandler Gateway:None Mar 13 12:22:02.840414 waagent[1903]: 2026-03-13T12:22:02.840389Z INFO EnvHandler ExtHandler Routes:None Mar 13 12:22:02.846748 waagent[1903]: 2026-03-13T12:22:02.846705Z INFO ExtHandler ExtHandler Mar 13 12:22:02.846835 waagent[1903]: 2026-03-13T12:22:02.846801Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f88cd66d-2b7a-47a0-bc67-de7bb79d8338 correlation 9bdbda58-3590-49b7-b1da-2d0cad6b0c81 created: 2026-03-13T12:20:28.640846Z] Mar 13 12:22:02.847184 waagent[1903]: 2026-03-13T12:22:02.847140Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Mar 13 12:22:02.847732 waagent[1903]: 2026-03-13T12:22:02.847693Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Mar 13 12:22:02.854978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 12:22:02.862780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 12:22:02.881181 waagent[1903]: 2026-03-13T12:22:02.881122Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 58659174-B661-4919-AF56-73CE566043B9;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 13 12:22:03.123284 waagent[1903]: 2026-03-13T12:22:03.122831Z INFO MonitorHandler ExtHandler Network interfaces: Mar 13 12:22:03.123284 waagent[1903]: Executing ['ip', '-a', '-o', 'link']: Mar 13 12:22:03.123284 waagent[1903]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 13 12:22:03.123284 waagent[1903]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:f0:59 brd ff:ff:ff:ff:ff:ff Mar 13 12:22:03.123284 waagent[1903]: 3: enP10671s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6d:f0:59 brd ff:ff:ff:ff:ff:ff\ altname enP10671p0s2 Mar 13 12:22:03.123284 waagent[1903]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 13 12:22:03.123284 waagent[1903]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 13 12:22:03.123284 waagent[1903]: 2: eth0 inet 10.200.20.18/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 13 12:22:03.123284 waagent[1903]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 13 12:22:03.123284 waagent[1903]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 13 12:22:03.123284 waagent[1903]: 2: eth0 inet6 fe80::20d:3aff:fe6d:f059/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 13 12:22:04.123966 waagent[1903]: 2026-03-13T12:22:04.123891Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Mar 13 12:22:04.123966 waagent[1903]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:22:04.123966 waagent[1903]: pkts bytes target prot opt in out source destination Mar 13 12:22:04.123966 waagent[1903]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:22:04.123966 waagent[1903]: pkts bytes target prot opt in out source destination Mar 13 12:22:04.123966 waagent[1903]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:22:04.123966 waagent[1903]: pkts bytes target prot opt in out source destination Mar 13 12:22:04.123966 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 12:22:04.123966 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 12:22:04.123966 waagent[1903]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 12:22:04.126799 waagent[1903]: 2026-03-13T12:22:04.126745Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 13 12:22:04.126799 waagent[1903]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:22:04.126799 waagent[1903]: pkts bytes target prot opt in out source destination Mar 13 12:22:04.126799 waagent[1903]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:22:04.126799 waagent[1903]: pkts bytes target prot opt in out source destination Mar 13 12:22:04.126799 waagent[1903]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:22:04.126799 waagent[1903]: pkts bytes target prot opt in out source destination Mar 13 12:22:04.126799 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 12:22:04.126799 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 12:22:04.126799 waagent[1903]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 12:22:04.127031 waagent[1903]: 2026-03-13T12:22:04.126998Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 13 12:22:06.539766 systemd[1]: Started kubelet.service - 
kubelet: The Kubernetes Node Agent. Mar 13 12:22:06.543561 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:22:06.575186 kubelet[2132]: E0313 12:22:06.575131 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:22:06.578417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:22:06.578727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 12:22:14.919664 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 12:22:14.921231 systemd[1]: Started sshd@0-10.200.20.18:22-10.200.16.10:45440.service - OpenSSH per-connection server daemon (10.200.16.10:45440). Mar 13 12:22:14.953063 chronyd[1689]: Selected source PHC0 Mar 13 12:22:15.480283 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 45440 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:22:15.481095 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:22:15.485607 systemd-logind[1698]: New session 3 of user core. Mar 13 12:22:15.488565 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 12:22:15.913376 systemd[1]: Started sshd@1-10.200.20.18:22-10.200.16.10:45442.service - OpenSSH per-connection server daemon (10.200.16.10:45442). Mar 13 12:22:16.399089 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 45442 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:22:16.399869 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:22:16.404362 systemd-logind[1698]: New session 4 of user core. 
Mar 13 12:22:16.409569 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 12:22:16.605053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 12:22:16.611572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:22:16.824137 systemd[1]: Started sshd@2-10.200.20.18:22-10.200.16.10:45456.service - OpenSSH per-connection server daemon (10.200.16.10:45456). Mar 13 12:22:16.899500 sshd[2146]: pam_unix(sshd:session): session closed for user core Mar 13 12:22:16.904948 systemd[1]: sshd@1-10.200.20.18:22-10.200.16.10:45442.service: Deactivated successfully. Mar 13 12:22:16.907144 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 12:22:16.908812 systemd-logind[1698]: Session 4 logged out. Waiting for processes to exit. Mar 13 12:22:16.910164 systemd-logind[1698]: Removed session 4. Mar 13 12:22:16.954835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:22:16.958520 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:22:16.991787 kubelet[2163]: E0313 12:22:16.991748 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:22:16.994466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:22:16.994609 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 13 12:22:17.272517 sshd[2154]: Accepted publickey for core from 10.200.16.10 port 45456 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:22:17.273267 sshd[2154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:22:17.278111 systemd-logind[1698]: New session 5 of user core.
Mar 13 12:22:17.282641 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 13 12:22:17.593727 sshd[2154]: pam_unix(sshd:session): session closed for user core
Mar 13 12:22:17.597140 systemd[1]: sshd@2-10.200.20.18:22-10.200.16.10:45456.service: Deactivated successfully.
Mar 13 12:22:17.598822 systemd[1]: session-5.scope: Deactivated successfully.
Mar 13 12:22:17.599656 systemd-logind[1698]: Session 5 logged out. Waiting for processes to exit.
Mar 13 12:22:17.600786 systemd-logind[1698]: Removed session 5.
Mar 13 12:22:17.697273 systemd[1]: Started sshd@3-10.200.20.18:22-10.200.16.10:45460.service - OpenSSH per-connection server daemon (10.200.16.10:45460).
Mar 13 12:22:18.186177 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 45460 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:22:18.186950 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:22:18.190474 systemd-logind[1698]: New session 6 of user core.
Mar 13 12:22:18.199555 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 13 12:22:18.538035 sshd[2175]: pam_unix(sshd:session): session closed for user core
Mar 13 12:22:18.540865 systemd-logind[1698]: Session 6 logged out. Waiting for processes to exit.
Mar 13 12:22:18.541231 systemd[1]: sshd@3-10.200.20.18:22-10.200.16.10:45460.service: Deactivated successfully.
Mar 13 12:22:18.543813 systemd[1]: session-6.scope: Deactivated successfully.
Mar 13 12:22:18.545484 systemd-logind[1698]: Removed session 6.
Mar 13 12:22:18.627527 systemd[1]: Started sshd@4-10.200.20.18:22-10.200.16.10:45474.service - OpenSSH per-connection server daemon (10.200.16.10:45474).
Mar 13 12:22:19.115329 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 45474 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:22:19.116116 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:22:19.119544 systemd-logind[1698]: New session 7 of user core.
Mar 13 12:22:19.125554 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 12:22:19.186010 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 13 12:22:19.557842 sudo[2185]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 13 12:22:19.558108 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 12:22:19.570251 sudo[2185]: pam_unix(sudo:session): session closed for user root
Mar 13 12:22:19.648548 sshd[2182]: pam_unix(sshd:session): session closed for user core
Mar 13 12:22:19.652162 systemd[1]: sshd@4-10.200.20.18:22-10.200.16.10:45474.service: Deactivated successfully.
Mar 13 12:22:19.654056 systemd[1]: session-7.scope: Deactivated successfully.
Mar 13 12:22:19.654987 systemd-logind[1698]: Session 7 logged out. Waiting for processes to exit.
Mar 13 12:22:19.655951 systemd-logind[1698]: Removed session 7.
Mar 13 12:22:19.736216 systemd[1]: Started sshd@5-10.200.20.18:22-10.200.16.10:45484.service - OpenSSH per-connection server daemon (10.200.16.10:45484).
Mar 13 12:22:20.225475 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 45484 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:22:20.227131 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:22:20.231595 systemd-logind[1698]: New session 8 of user core.
Mar 13 12:22:20.236689 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 13 12:22:20.500716 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 12:22:20.501373 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 12:22:20.504282 sudo[2194]: pam_unix(sudo:session): session closed for user root
Mar 13 12:22:20.508601 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 13 12:22:20.508853 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 12:22:20.528645 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 13 12:22:20.529757 auditctl[2197]: No rules
Mar 13 12:22:20.530256 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 12:22:20.530450 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 13 12:22:20.533017 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 13 12:22:20.553864 augenrules[2215]: No rules
Mar 13 12:22:20.556462 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 13 12:22:20.557534 sudo[2193]: pam_unix(sudo:session): session closed for user root
Mar 13 12:22:20.635809 sshd[2190]: pam_unix(sshd:session): session closed for user core
Mar 13 12:22:20.638342 systemd[1]: sshd@5-10.200.20.18:22-10.200.16.10:45484.service: Deactivated successfully.
Mar 13 12:22:20.640006 systemd[1]: session-8.scope: Deactivated successfully.
Mar 13 12:22:20.641353 systemd-logind[1698]: Session 8 logged out. Waiting for processes to exit.
Mar 13 12:22:20.642293 systemd-logind[1698]: Removed session 8.
Mar 13 12:22:20.722036 systemd[1]: Started sshd@6-10.200.20.18:22-10.200.16.10:46776.service - OpenSSH per-connection server daemon (10.200.16.10:46776).
Mar 13 12:22:21.210458 sshd[2223]: Accepted publickey for core from 10.200.16.10 port 46776 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:22:21.211175 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:22:21.214652 systemd-logind[1698]: New session 9 of user core.
Mar 13 12:22:21.225604 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 13 12:22:21.484648 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 12:22:21.485324 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 12:22:22.439646 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 12:22:22.439835 (dockerd)[2241]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 12:22:23.053863 dockerd[2241]: time="2026-03-13T12:22:23.053809339Z" level=info msg="Starting up"
Mar 13 12:22:23.507785 dockerd[2241]: time="2026-03-13T12:22:23.507744900Z" level=info msg="Loading containers: start."
Mar 13 12:22:23.633454 kernel: Initializing XFRM netlink socket
Mar 13 12:22:23.769932 systemd-networkd[1354]: docker0: Link UP
Mar 13 12:22:23.800594 dockerd[2241]: time="2026-03-13T12:22:23.800555319Z" level=info msg="Loading containers: done."
Mar 13 12:22:23.811243 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3916746961-merged.mount: Deactivated successfully.
Mar 13 12:22:23.822102 dockerd[2241]: time="2026-03-13T12:22:23.822062162Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 12:22:23.822268 dockerd[2241]: time="2026-03-13T12:22:23.822180123Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 13 12:22:23.822317 dockerd[2241]: time="2026-03-13T12:22:23.822290564Z" level=info msg="Daemon has completed initialization"
Mar 13 12:22:23.878899 dockerd[2241]: time="2026-03-13T12:22:23.878835153Z" level=info msg="API listen on /run/docker.sock"
Mar 13 12:22:23.879539 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 12:22:24.334226 containerd[1735]: time="2026-03-13T12:22:24.333993963Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 13 12:22:25.235512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482497746.mount: Deactivated successfully.
Mar 13 12:22:26.539836 containerd[1735]: time="2026-03-13T12:22:26.539784906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:26.542901 containerd[1735]: time="2026-03-13T12:22:26.542873310Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583252"
Mar 13 12:22:26.546353 containerd[1735]: time="2026-03-13T12:22:26.546320795Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:26.551137 containerd[1735]: time="2026-03-13T12:22:26.550682441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:26.551835 containerd[1735]: time="2026-03-13T12:22:26.551807203Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.2177738s"
Mar 13 12:22:26.551896 containerd[1735]: time="2026-03-13T12:22:26.551837923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\""
Mar 13 12:22:26.552353 containerd[1735]: time="2026-03-13T12:22:26.552325483Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 13 12:22:27.105060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 13 12:22:27.113710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 12:22:27.208104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 12:22:27.211828 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 12:22:27.309991 kubelet[2443]: E0313 12:22:27.309929 2443 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 12:22:27.312065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 12:22:27.312188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 12:22:28.590467 containerd[1735]: time="2026-03-13T12:22:28.590180057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:28.594688 containerd[1735]: time="2026-03-13T12:22:28.594347103Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139641"
Mar 13 12:22:28.655115 containerd[1735]: time="2026-03-13T12:22:28.655067549Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:28.702831 containerd[1735]: time="2026-03-13T12:22:28.702508936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:28.704675 containerd[1735]: time="2026-03-13T12:22:28.704645779Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 2.152289776s"
Mar 13 12:22:28.704786 containerd[1735]: time="2026-03-13T12:22:28.704770899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\""
Mar 13 12:22:28.705305 containerd[1735]: time="2026-03-13T12:22:28.705280780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 13 12:22:35.047461 containerd[1735]: time="2026-03-13T12:22:35.047253866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:35.049716 containerd[1735]: time="2026-03-13T12:22:35.049687510Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195544"
Mar 13 12:22:35.095390 containerd[1735]: time="2026-03-13T12:22:35.095322837Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:35.143462 containerd[1735]: time="2026-03-13T12:22:35.143055888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:35.144546 containerd[1735]: time="2026-03-13T12:22:35.144152970Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 6.43884123s"
Mar 13 12:22:35.144546 containerd[1735]: time="2026-03-13T12:22:35.144190890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\""
Mar 13 12:22:35.144650 containerd[1735]: time="2026-03-13T12:22:35.144627611Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 13 12:22:37.355254 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 13 12:22:37.362661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 12:22:37.459550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 12:22:37.463712 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 12:22:37.498027 update_engine[1702]: I20260313 12:22:37.497476 1702 update_attempter.cc:509] Updating boot flags...
Mar 13 12:22:37.561929 kubelet[2466]: E0313 12:22:37.561892 2466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 12:22:37.564513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 12:22:37.564653 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 12:22:41.441922 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2484)
Mar 13 12:22:42.327487 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2477)
Mar 13 12:22:42.467738 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2477)
Mar 13 12:22:44.663108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461612436.mount: Deactivated successfully.
Mar 13 12:22:45.640858 containerd[1735]: time="2026-03-13T12:22:45.640813404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:45.702371 containerd[1735]: time="2026-03-13T12:22:45.702337650Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697088"
Mar 13 12:22:45.770712 containerd[1735]: time="2026-03-13T12:22:45.770683189Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:45.796827 containerd[1735]: time="2026-03-13T12:22:45.796772282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:45.797956 containerd[1735]: time="2026-03-13T12:22:45.797402563Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 10.652747912s"
Mar 13 12:22:45.797956 containerd[1735]: time="2026-03-13T12:22:45.797447243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\""
Mar 13 12:22:45.797956 containerd[1735]: time="2026-03-13T12:22:45.797841924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 13 12:22:46.596446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856610589.mount: Deactivated successfully.
Mar 13 12:22:47.605000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 13 12:22:47.612631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 12:22:47.724677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 12:22:47.728358 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 12:22:47.802465 kubelet[2634]: E0313 12:22:47.802406 2634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 12:22:47.805018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 12:22:47.805167 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 12:22:48.158965 containerd[1735]: time="2026-03-13T12:22:48.158922076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:48.161544 containerd[1735]: time="2026-03-13T12:22:48.161515080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Mar 13 12:22:48.165167 containerd[1735]: time="2026-03-13T12:22:48.165124646Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:48.170287 containerd[1735]: time="2026-03-13T12:22:48.169882734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:48.171136 containerd[1735]: time="2026-03-13T12:22:48.171108016Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.373240452s"
Mar 13 12:22:48.171231 containerd[1735]: time="2026-03-13T12:22:48.171215616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Mar 13 12:22:48.171825 containerd[1735]: time="2026-03-13T12:22:48.171793857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 13 12:22:48.794028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009517201.mount: Deactivated successfully.
Mar 13 12:22:48.814467 containerd[1735]: time="2026-03-13T12:22:48.813833031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:48.816658 containerd[1735]: time="2026-03-13T12:22:48.816451196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Mar 13 12:22:48.819650 containerd[1735]: time="2026-03-13T12:22:48.819625761Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:48.824808 containerd[1735]: time="2026-03-13T12:22:48.823994289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:48.824808 containerd[1735]: time="2026-03-13T12:22:48.824698370Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 652.860793ms"
Mar 13 12:22:48.824808 containerd[1735]: time="2026-03-13T12:22:48.824728090Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Mar 13 12:22:48.825324 containerd[1735]: time="2026-03-13T12:22:48.825300011Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 13 12:22:49.493354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752065729.mount: Deactivated successfully.
Mar 13 12:22:50.502047 containerd[1735]: time="2026-03-13T12:22:50.500978225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:50.504224 containerd[1735]: time="2026-03-13T12:22:50.504176711Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125515"
Mar 13 12:22:50.507760 containerd[1735]: time="2026-03-13T12:22:50.507719597Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:50.515740 containerd[1735]: time="2026-03-13T12:22:50.515708571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:50.516863 containerd[1735]: time="2026-03-13T12:22:50.516597772Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.691267241s"
Mar 13 12:22:50.516863 containerd[1735]: time="2026-03-13T12:22:50.516628692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
Mar 13 12:22:55.866661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 12:22:55.873028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 12:22:55.909313 systemd[1]: Reloading requested from client PID 2733 ('systemctl') (unit session-9.scope)...
Mar 13 12:22:55.909331 systemd[1]: Reloading...
Mar 13 12:22:56.009069 zram_generator::config[2775]: No configuration found.
Mar 13 12:22:56.114991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 13 12:22:56.196421 systemd[1]: Reloading finished in 286 ms.
Mar 13 12:22:56.403497 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 12:22:56.403610 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 12:22:56.403880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 12:22:56.411025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 12:22:57.422779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 12:22:57.432666 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 12:22:57.465710 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 12:22:57.465710 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:22:57.466622 kubelet[2839]: I0313 12:22:57.466440 2839 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 12:22:58.272130 kubelet[2839]: I0313 12:22:58.272092 2839 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 13 12:22:58.272130 kubelet[2839]: I0313 12:22:58.272119 2839 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 12:22:58.273420 kubelet[2839]: I0313 12:22:58.273404 2839 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 13 12:22:58.273470 kubelet[2839]: I0313 12:22:58.273421 2839 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 12:22:58.273655 kubelet[2839]: I0313 12:22:58.273640 2839 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 12:22:58.283461 kubelet[2839]: E0313 12:22:58.282657 2839 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 12:22:58.285224 kubelet[2839]: I0313 12:22:58.285199 2839 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 12:22:58.289178 kubelet[2839]: E0313 12:22:58.289147 2839 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 13 12:22:58.289309 kubelet[2839]: I0313 12:22:58.289297 2839 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 13 12:22:58.292028 kubelet[2839]: I0313 12:22:58.292012 2839 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 13 12:22:58.292345 kubelet[2839]: I0313 12:22:58.292323 2839 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 12:22:58.292558 kubelet[2839]: I0313 12:22:58.292407 2839 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-461ebd96c0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 12:22:58.292683 kubelet[2839]: I0313 12:22:58.292672 2839 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 12:22:58.292732 kubelet[2839]: I0313 12:22:58.292726 2839 container_manager_linux.go:306] "Creating device plugin manager"
Mar 13 12:22:58.292866 kubelet[2839]: I0313 12:22:58.292856 2839 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 13 12:22:58.297687 kubelet[2839]: I0313 12:22:58.297672 2839 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 12:22:58.298991 kubelet[2839]: I0313 12:22:58.298977 2839 kubelet.go:475] "Attempting to sync node with API server"
Mar 13 12:22:58.299079 kubelet[2839]: I0313 12:22:58.299070 2839 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 12:22:58.299152 kubelet[2839]: I0313 12:22:58.299144 2839 kubelet.go:387] "Adding apiserver pod source"
Mar 13 12:22:58.299208 kubelet[2839]: I0313 12:22:58.299200 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 12:22:58.299495 kubelet[2839]: E0313 12:22:58.299464 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-461ebd96c0&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 12:22:58.300193 kubelet[2839]: I0313 12:22:58.300176 2839 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 13 12:22:58.300876 kubelet[2839]: I0313 12:22:58.300853 2839 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 12:22:58.300974 kubelet[2839]: I0313 12:22:58.300964 2839 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 13 12:22:58.301063 kubelet[2839]: W0313 12:22:58.301054 2839 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 12:22:58.303153 kubelet[2839]: I0313 12:22:58.303138 2839 server.go:1262] "Started kubelet"
Mar 13 12:22:58.303792 kubelet[2839]: E0313 12:22:58.303413 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 12:22:58.305373 kubelet[2839]: I0313 12:22:58.305334 2839 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 12:22:58.306068 kubelet[2839]: I0313 12:22:58.306044 2839 server.go:310] "Adding debug handlers to kubelet server"
Mar 13 12:22:58.306284 kubelet[2839]: I0313 12:22:58.306244 2839 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 12:22:58.306376 kubelet[2839]: I0313 12:22:58.306365 2839 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 13 12:22:58.306686 kubelet[2839]: I0313 12:22:58.306666 2839 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 12:22:58.307879 kubelet[2839]: E0313 12:22:58.306870 2839 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-461ebd96c0.189c66116cceebb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-461ebd96c0,UID:ci-4081.3.101-461ebd96c0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-461ebd96c0,},FirstTimestamp:2026-03-13 12:22:58.303110068 +0000 UTC m=+0.867833688,LastTimestamp:2026-03-13 12:22:58.303110068 +0000 UTC m=+0.867833688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-461ebd96c0,}"
Mar 13 12:22:58.310441 kubelet[2839]: E0313 12:22:58.310413 2839 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 12:22:58.311053 kubelet[2839]: I0313 12:22:58.310922 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 12:22:58.311053 kubelet[2839]: I0313 12:22:58.311041 2839 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 12:22:58.312394 kubelet[2839]: E0313 12:22:58.312357 2839 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.101-461ebd96c0\" not found"
Mar 13 12:22:58.312394 kubelet[2839]: I0313 12:22:58.312395 2839 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 13 12:22:58.313280 kubelet[2839]: I0313 12:22:58.313160 2839 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 13 12:22:58.313280 kubelet[2839]: I0313 12:22:58.313219 2839 reconciler.go:29] "Reconciler: start to sync state"
Mar 13 12:22:58.314347 kubelet[2839]: I0313 12:22:58.314176 2839 factory.go:223] Registration of the systemd container factory successfully
Mar 13 12:22:58.314347 kubelet[2839]: I0313 12:22:58.314262 2839 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 12:22:58.314619 kubelet[2839]: E0313 12:22:58.314592 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 12:22:58.315618 kubelet[2839]: I0313 12:22:58.315596 2839 factory.go:223] Registration of the containerd container factory successfully
Mar 13 12:22:58.325079 kubelet[2839]: E0313 12:22:58.325034 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-461ebd96c0?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="200ms"
Mar 13 12:22:58.331801 kubelet[2839]: I0313 12:22:58.331769 2839 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 13 12:22:58.332649 kubelet[2839]: I0313 12:22:58.332626 2839 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 13 12:22:58.332649 kubelet[2839]: I0313 12:22:58.332642 2839 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 13 12:22:58.332732 kubelet[2839]: I0313 12:22:58.332659 2839 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 13 12:22:58.332732 kubelet[2839]: E0313 12:22:58.332691 2839 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 12:22:58.337937 kubelet[2839]: E0313 12:22:58.337912 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 12:22:58.341844 kubelet[2839]: I0313 12:22:58.341823 2839 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 12:22:58.341844 kubelet[2839]: I0313 12:22:58.341838 2839 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 12:22:58.341931 kubelet[2839]: I0313 12:22:58.341856 2839 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 12:22:58.346541 kubelet[2839]: I0313 12:22:58.346515 2839 policy_none.go:49] "None policy: Start"
Mar 13 12:22:58.346541 kubelet[2839]: I0313 12:22:58.346537 2839 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 13 12:22:58.346541 kubelet[2839]: I0313 12:22:58.346548 2839 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 13 12:22:58.351101 kubelet[2839]: I0313 12:22:58.351085 2839 policy_none.go:47] "Start"
Mar 13 12:22:58.354822 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 12:22:58.365108 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 12:22:58.368075 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 12:22:58.377628 kubelet[2839]: E0313 12:22:58.377226 2839 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 12:22:58.377628 kubelet[2839]: I0313 12:22:58.377403 2839 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 12:22:58.377628 kubelet[2839]: I0313 12:22:58.377416 2839 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 12:22:58.377740 kubelet[2839]: I0313 12:22:58.377662 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 12:22:58.380042 kubelet[2839]: E0313 12:22:58.379824 2839 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 12:22:58.380042 kubelet[2839]: E0313 12:22:58.379859 2839 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.101-461ebd96c0\" not found"
Mar 13 12:22:58.446327 systemd[1]: Created slice kubepods-burstable-pod21fd1cc8bb4bfdda18d742eab907ea19.slice - libcontainer container kubepods-burstable-pod21fd1cc8bb4bfdda18d742eab907ea19.slice.
Mar 13 12:22:58.455021 kubelet[2839]: E0313 12:22:58.454375 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.459708 systemd[1]: Created slice kubepods-burstable-pod718e671e05f994af1198ebd00db17c48.slice - libcontainer container kubepods-burstable-pod718e671e05f994af1198ebd00db17c48.slice.
Mar 13 12:22:58.461550 kubelet[2839]: E0313 12:22:58.461530 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.464131 systemd[1]: Created slice kubepods-burstable-pod6461d85ca71a951be5b0880dd56ae3da.slice - libcontainer container kubepods-burstable-pod6461d85ca71a951be5b0880dd56ae3da.slice.
Mar 13 12:22:58.465976 kubelet[2839]: E0313 12:22:58.465843 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.479528 kubelet[2839]: I0313 12:22:58.479360 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.479839 kubelet[2839]: E0313 12:22:58.479807 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.526520 kubelet[2839]: E0313 12:22:58.526392 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-461ebd96c0?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="400ms"
Mar 13 12:22:58.614903 kubelet[2839]: I0313 12:22:58.614859 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21fd1cc8bb4bfdda18d742eab907ea19-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-461ebd96c0\" (UID: \"21fd1cc8bb4bfdda18d742eab907ea19\") " pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615017 kubelet[2839]: I0313 12:22:58.614910 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615017 kubelet[2839]: I0313 12:22:58.614932 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615017 kubelet[2839]: I0313 12:22:58.614945 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6461d85ca71a951be5b0880dd56ae3da-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-461ebd96c0\" (UID: \"6461d85ca71a951be5b0880dd56ae3da\") " pod="kube-system/kube-scheduler-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615017 kubelet[2839]: I0313 12:22:58.614960 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21fd1cc8bb4bfdda18d742eab907ea19-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-461ebd96c0\" (UID: \"21fd1cc8bb4bfdda18d742eab907ea19\") " pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615017 kubelet[2839]: I0313 12:22:58.614978 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21fd1cc8bb4bfdda18d742eab907ea19-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-461ebd96c0\" (UID: \"21fd1cc8bb4bfdda18d742eab907ea19\") " pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615128 kubelet[2839]: I0313 12:22:58.614995 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615128 kubelet[2839]: I0313 12:22:58.615011 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.615128 kubelet[2839]: I0313 12:22:58.615025 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.681847 kubelet[2839]: I0313 12:22:58.681507 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.681847 kubelet[2839]: E0313 12:22:58.681773 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:58.761049 containerd[1735]: time="2026-03-13T12:22:58.760999583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-461ebd96c0,Uid:21fd1cc8bb4bfdda18d742eab907ea19,Namespace:kube-system,Attempt:0,}"
Mar 13 12:22:58.767159 containerd[1735]: time="2026-03-13T12:22:58.766837196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-461ebd96c0,Uid:718e671e05f994af1198ebd00db17c48,Namespace:kube-system,Attempt:0,}"
Mar 13 12:22:58.771052 containerd[1735]: time="2026-03-13T12:22:58.770893045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-461ebd96c0,Uid:6461d85ca71a951be5b0880dd56ae3da,Namespace:kube-system,Attempt:0,}"
Mar 13 12:22:58.926952 kubelet[2839]: E0313 12:22:58.926916 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-461ebd96c0?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="800ms"
Mar 13 12:22:59.083828 kubelet[2839]: I0313 12:22:59.083784 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:59.084115 kubelet[2839]: E0313 12:22:59.084086 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:59.386679 kubelet[2839]: E0313 12:22:59.386640 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 12:22:59.650717 kubelet[2839]: E0313 12:22:59.649938 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-461ebd96c0&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 12:22:59.727941 kubelet[2839]: E0313 12:22:59.727902 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-461ebd96c0?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="1.6s"
Mar 13 12:22:59.783368 kubelet[2839]: E0313 12:22:59.783336 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 12:22:59.885702 kubelet[2839]: I0313 12:22:59.885636 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:59.885965 kubelet[2839]: E0313 12:22:59.885938 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:22:59.937192 kubelet[2839]: E0313 12:22:59.937100 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 12:23:00.373850 kubelet[2839]: E0313 12:23:00.373822 2839 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 12:23:00.755231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319351472.mount: Deactivated successfully.
Mar 13 12:23:01.051502 containerd[1735]: time="2026-03-13T12:23:01.050620921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 12:23:01.053408 containerd[1735]: time="2026-03-13T12:23:01.053362007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Mar 13 12:23:01.098702 containerd[1735]: time="2026-03-13T12:23:01.098605506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 12:23:01.144396 containerd[1735]: time="2026-03-13T12:23:01.144347525Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 12:23:01.147291 containerd[1735]: time="2026-03-13T12:23:01.147249651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 13 12:23:01.193171 containerd[1735]: time="2026-03-13T12:23:01.193115831Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 12:23:01.195913 containerd[1735]: time="2026-03-13T12:23:01.195694357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 13 12:23:01.241208 containerd[1735]: time="2026-03-13T12:23:01.241165736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 12:23:01.242449 containerd[1735]: time="2026-03-13T12:23:01.241798457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.470851532s"
Mar 13 12:23:01.242809 containerd[1735]: time="2026-03-13T12:23:01.242779779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.481701516s"
Mar 13 12:23:01.243494 containerd[1735]: time="2026-03-13T12:23:01.243471301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.476563185s"
Mar 13 12:23:01.329132 kubelet[2839]: E0313 12:23:01.329016 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-461ebd96c0?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="3.2s"
Mar 13 12:23:01.488452 kubelet[2839]: I0313 12:23:01.488237 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:23:01.488675 kubelet[2839]: E0313 12:23:01.488651 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:23:01.502131 kubelet[2839]: E0313 12:23:01.502105 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 12:23:01.963352 kubelet[2839]: E0313 12:23:01.963315 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 12:23:02.237191 kubelet[2839]: E0313 12:23:02.237096 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 12:23:02.400977 kubelet[2839]: E0313 12:23:02.400937 2839 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-461ebd96c0&limit=500&resourceVersion=0\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 12:23:04.359813 containerd[1735]: time="2026-03-13T12:23:04.359713946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 13 12:23:04.359813 containerd[1735]: time="2026-03-13T12:23:04.359782866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 13 12:23:04.360200 containerd[1735]: time="2026-03-13T12:23:04.359798266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:23:04.361974 containerd[1735]: time="2026-03-13T12:23:04.361925152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:23:04.367844 containerd[1735]: time="2026-03-13T12:23:04.367738049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 13 12:23:04.367844 containerd[1735]: time="2026-03-13T12:23:04.367809170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 13 12:23:04.367986 containerd[1735]: time="2026-03-13T12:23:04.367828090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:23:04.367986 containerd[1735]: time="2026-03-13T12:23:04.367904050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:23:04.374234 containerd[1735]: time="2026-03-13T12:23:04.370841499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 13 12:23:04.374234 containerd[1735]: time="2026-03-13T12:23:04.370887579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 13 12:23:04.374234 containerd[1735]: time="2026-03-13T12:23:04.370903339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:23:04.374234 containerd[1735]: time="2026-03-13T12:23:04.371035499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:23:04.390618 systemd[1]: Started cri-containerd-bffcbbe72e388f7b41ac9c05b8cfae40dfeea2d1d7aec4f1dc59339b71a60c0f.scope - libcontainer container bffcbbe72e388f7b41ac9c05b8cfae40dfeea2d1d7aec4f1dc59339b71a60c0f.
Mar 13 12:23:04.394526 systemd[1]: Started cri-containerd-5fd4bdba640804831f8e3a5857e39a7007c714ae0af80fa4467b693c44b7d433.scope - libcontainer container 5fd4bdba640804831f8e3a5857e39a7007c714ae0af80fa4467b693c44b7d433.
Mar 13 12:23:04.406581 systemd[1]: Started cri-containerd-d62fdb9708af7d028821c4ab3b9a2586d7cd5115be2fc429c274b853d1b0342c.scope - libcontainer container d62fdb9708af7d028821c4ab3b9a2586d7cd5115be2fc429c274b853d1b0342c.
Mar 13 12:23:04.436946 containerd[1735]: time="2026-03-13T12:23:04.436820693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-461ebd96c0,Uid:718e671e05f994af1198ebd00db17c48,Namespace:kube-system,Attempt:0,} returns sandbox id \"bffcbbe72e388f7b41ac9c05b8cfae40dfeea2d1d7aec4f1dc59339b71a60c0f\""
Mar 13 12:23:04.449423 containerd[1735]: time="2026-03-13T12:23:04.448326767Z" level=info msg="CreateContainer within sandbox \"bffcbbe72e388f7b41ac9c05b8cfae40dfeea2d1d7aec4f1dc59339b71a60c0f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 12:23:04.449423 containerd[1735]: time="2026-03-13T12:23:04.448846609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-461ebd96c0,Uid:21fd1cc8bb4bfdda18d742eab907ea19,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fd4bdba640804831f8e3a5857e39a7007c714ae0af80fa4467b693c44b7d433\""
Mar 13 12:23:04.461221 containerd[1735]: time="2026-03-13T12:23:04.460619004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-461ebd96c0,Uid:6461d85ca71a951be5b0880dd56ae3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"d62fdb9708af7d028821c4ab3b9a2586d7cd5115be2fc429c274b853d1b0342c\""
Mar 13 12:23:04.495479 containerd[1735]: time="2026-03-13T12:23:04.495412946Z" level=info msg="CreateContainer within sandbox \"5fd4bdba640804831f8e3a5857e39a7007c714ae0af80fa4467b693c44b7d433\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 13 12:23:04.530476 kubelet[2839]: E0313 12:23:04.530409 2839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-461ebd96c0?timeout=10s\": dial tcp 10.200.20.18:6443: connect: connection refused" interval="6.4s"
Mar 13 12:23:04.543607 containerd[1735]: time="2026-03-13T12:23:04.543569609Z" level=info msg="CreateContainer within sandbox \"d62fdb9708af7d028821c4ab3b9a2586d7cd5115be2fc429c274b853d1b0342c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 12:23:04.628114 kubelet[2839]: E0313 12:23:04.628007 2839 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 12:23:04.690757 kubelet[2839]: I0313 12:23:04.690729 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:23:04.749261 kubelet[2839]: E0313 12:23:04.691147 2839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.18:6443/api/v1/nodes\": dial tcp 10.200.20.18:6443: connect: connection refused" node="ci-4081.3.101-461ebd96c0"
Mar 13 12:23:05.007960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522061503.mount: Deactivated successfully.
Mar 13 12:23:05.143857 containerd[1735]: time="2026-03-13T12:23:05.143709180Z" level=info msg="CreateContainer within sandbox \"5fd4bdba640804831f8e3a5857e39a7007c714ae0af80fa4467b693c44b7d433\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e903eef99fd9cd2e38a4b39cda04b236db3b4951ad6d16dec6bb0d20f0087a6f\""
Mar 13 12:23:05.144592 containerd[1735]: time="2026-03-13T12:23:05.144559743Z" level=info msg="StartContainer for \"e903eef99fd9cd2e38a4b39cda04b236db3b4951ad6d16dec6bb0d20f0087a6f\""
Mar 13 12:23:05.169593 systemd[1]: Started cri-containerd-e903eef99fd9cd2e38a4b39cda04b236db3b4951ad6d16dec6bb0d20f0087a6f.scope - libcontainer container e903eef99fd9cd2e38a4b39cda04b236db3b4951ad6d16dec6bb0d20f0087a6f.
Mar 13 12:23:05.221227 kubelet[2839]: E0313 12:23:05.221119 2839 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.18:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-461ebd96c0.189c66116cceebb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-461ebd96c0,UID:ci-4081.3.101-461ebd96c0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-461ebd96c0,},FirstTimestamp:2026-03-13 12:22:58.303110068 +0000 UTC m=+0.867833688,LastTimestamp:2026-03-13 12:22:58.303110068 +0000 UTC m=+0.867833688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-461ebd96c0,}"
Mar 13 12:23:05.299217 containerd[1735]: time="2026-03-13T12:23:05.299105439Z" level=info msg="CreateContainer within sandbox \"bffcbbe72e388f7b41ac9c05b8cfae40dfeea2d1d7aec4f1dc59339b71a60c0f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74e39857e8c12dbcbd50e37213dce022a911ce9d8ba63bf48f650efb74bbb69a\""
Mar 13 12:23:05.299217 containerd[1735]: time="2026-03-13T12:23:05.299202999Z" level=info msg="StartContainer for \"e903eef99fd9cd2e38a4b39cda04b236db3b4951ad6d16dec6bb0d20f0087a6f\" returns successfully"
Mar 13 12:23:05.300769 containerd[1735]: time="2026-03-13T12:23:05.299972241Z" level=info msg="StartContainer for \"74e39857e8c12dbcbd50e37213dce022a911ce9d8ba63bf48f650efb74bbb69a\""
Mar 13 12:23:05.329583 systemd[1]: Started cri-containerd-74e39857e8c12dbcbd50e37213dce022a911ce9d8ba63bf48f650efb74bbb69a.scope - libcontainer container 74e39857e8c12dbcbd50e37213dce022a911ce9d8ba63bf48f650efb74bbb69a.
Mar 13 12:23:05.347740 containerd[1735]: time="2026-03-13T12:23:05.347614062Z" level=info msg="CreateContainer within sandbox \"d62fdb9708af7d028821c4ab3b9a2586d7cd5115be2fc429c274b853d1b0342c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d3f355ade3d8d856cf7a92c825eb4a87b3c1dcc4116bd8cd6d71778a512e7cc\"" Mar 13 12:23:05.348779 containerd[1735]: time="2026-03-13T12:23:05.348389504Z" level=info msg="StartContainer for \"5d3f355ade3d8d856cf7a92c825eb4a87b3c1dcc4116bd8cd6d71778a512e7cc\"" Mar 13 12:23:05.358924 kubelet[2839]: E0313 12:23:05.358896 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:05.381050 containerd[1735]: time="2026-03-13T12:23:05.380656320Z" level=info msg="StartContainer for \"74e39857e8c12dbcbd50e37213dce022a911ce9d8ba63bf48f650efb74bbb69a\" returns successfully" Mar 13 12:23:05.400575 systemd[1]: Started cri-containerd-5d3f355ade3d8d856cf7a92c825eb4a87b3c1dcc4116bd8cd6d71778a512e7cc.scope - libcontainer container 5d3f355ade3d8d856cf7a92c825eb4a87b3c1dcc4116bd8cd6d71778a512e7cc. 
Mar 13 12:23:05.481788 containerd[1735]: time="2026-03-13T12:23:05.481454697Z" level=info msg="StartContainer for \"5d3f355ade3d8d856cf7a92c825eb4a87b3c1dcc4116bd8cd6d71778a512e7cc\" returns successfully" Mar 13 12:23:06.364457 kubelet[2839]: E0313 12:23:06.364295 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:06.368214 kubelet[2839]: E0313 12:23:06.367896 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:06.368214 kubelet[2839]: E0313 12:23:06.368057 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:07.368472 kubelet[2839]: E0313 12:23:07.367034 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:07.368472 kubelet[2839]: E0313 12:23:07.367351 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:07.369342 kubelet[2839]: E0313 12:23:07.369186 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:08.305248 kubelet[2839]: I0313 12:23:08.305023 2839 apiserver.go:52] "Watching apiserver" Mar 13 12:23:08.368800 kubelet[2839]: E0313 12:23:08.368755 2839 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-461ebd96c0\" not found" 
node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:08.380577 kubelet[2839]: E0313 12:23:08.380074 2839 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.101-461ebd96c0\" not found" Mar 13 12:23:08.413609 kubelet[2839]: I0313 12:23:08.413574 2839 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 12:23:08.713904 kubelet[2839]: E0313 12:23:08.713842 2839 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.101-461ebd96c0" not found Mar 13 12:23:09.072585 kubelet[2839]: E0313 12:23:09.072369 2839 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.101-461ebd96c0" not found Mar 13 12:23:09.666172 kubelet[2839]: E0313 12:23:09.666119 2839 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.101-461ebd96c0" not found Mar 13 12:23:10.933574 kubelet[2839]: E0313 12:23:10.933535 2839 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.101-461ebd96c0\" not found" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:11.093914 kubelet[2839]: I0313 12:23:11.093620 2839 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:11.104949 kubelet[2839]: I0313 12:23:11.104742 2839 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:11.117333 kubelet[2839]: I0313 12:23:11.117065 2839 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:11.127774 kubelet[2839]: I0313 12:23:11.127746 2839 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:23:11.128876 kubelet[2839]: I0313 12:23:11.128849 2839 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-461ebd96c0" Mar 13 12:23:11.135883 kubelet[2839]: I0313 12:23:11.135862 2839 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:23:11.136054 kubelet[2839]: I0313 12:23:11.135938 2839 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" Mar 13 12:23:11.142809 kubelet[2839]: I0313 12:23:11.142534 2839 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:23:11.578530 systemd[1]: Reloading requested from client PID 3121 ('systemctl') (unit session-9.scope)... Mar 13 12:23:11.578544 systemd[1]: Reloading... Mar 13 12:23:11.679548 zram_generator::config[3161]: No configuration found. Mar 13 12:23:11.815459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:23:11.911017 systemd[1]: Reloading finished in 332 ms. Mar 13 12:23:11.944164 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:23:11.957241 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 12:23:11.957459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:23:11.957508 systemd[1]: kubelet.service: Consumed 1.202s CPU time, 123.1M memory peak, 0B memory swap peak. Mar 13 12:23:11.961752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 12:23:12.321152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:23:12.333718 (kubelet)[3225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 12:23:12.371140 kubelet[3225]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 12:23:12.371140 kubelet[3225]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:23:12.371526 kubelet[3225]: I0313 12:23:12.371187 3225 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 12:23:12.378065 kubelet[3225]: I0313 12:23:12.378003 3225 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 13 12:23:12.378669 kubelet[3225]: I0313 12:23:12.378176 3225 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 12:23:12.378669 kubelet[3225]: I0313 12:23:12.378208 3225 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 12:23:12.378669 kubelet[3225]: I0313 12:23:12.378216 3225 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 13 12:23:12.378669 kubelet[3225]: I0313 12:23:12.378403 3225 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 12:23:12.379984 kubelet[3225]: I0313 12:23:12.379969 3225 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 12:23:12.383051 kubelet[3225]: I0313 12:23:12.383033 3225 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 12:23:12.387515 kubelet[3225]: E0313 12:23:12.387491 3225 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 13 12:23:12.387662 kubelet[3225]: I0313 12:23:12.387649 3225 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 13 12:23:12.395658 kubelet[3225]: I0313 12:23:12.395635 3225 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 12:23:12.397786 kubelet[3225]: I0313 12:23:12.397742 3225 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:23:12.397956 kubelet[3225]: I0313 12:23:12.397774 3225 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-461ebd96c0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:23:12.397956 kubelet[3225]: I0313 12:23:12.397953 3225 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 
12:23:12.398084 kubelet[3225]: I0313 12:23:12.397963 3225 container_manager_linux.go:306] "Creating device plugin manager" Mar 13 12:23:12.398084 kubelet[3225]: I0313 12:23:12.397992 3225 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 12:23:12.398173 kubelet[3225]: I0313 12:23:12.398156 3225 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:23:12.398313 kubelet[3225]: I0313 12:23:12.398298 3225 kubelet.go:475] "Attempting to sync node with API server" Mar 13 12:23:12.398361 kubelet[3225]: I0313 12:23:12.398315 3225 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:23:12.398361 kubelet[3225]: I0313 12:23:12.398344 3225 kubelet.go:387] "Adding apiserver pod source" Mar 13 12:23:12.398361 kubelet[3225]: I0313 12:23:12.398353 3225 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:23:12.400517 kubelet[3225]: I0313 12:23:12.399748 3225 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 13 12:23:12.402658 kubelet[3225]: I0313 12:23:12.401186 3225 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 12:23:12.402791 kubelet[3225]: I0313 12:23:12.402778 3225 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 12:23:12.408438 kubelet[3225]: I0313 12:23:12.406807 3225 server.go:1262] "Started kubelet" Mar 13 12:23:12.411447 kubelet[3225]: I0313 12:23:12.409484 3225 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 12:23:12.423039 kubelet[3225]: I0313 12:23:12.423013 3225 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 
12:23:12.426441 kubelet[3225]: E0313 12:23:12.410517 3225 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 12:23:12.426552 kubelet[3225]: I0313 12:23:12.410659 3225 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:23:12.426649 kubelet[3225]: I0313 12:23:12.426632 3225 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 12:23:12.427309 kubelet[3225]: I0313 12:23:12.426842 3225 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:23:12.432452 kubelet[3225]: I0313 12:23:12.427479 3225 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 13 12:23:12.432452 kubelet[3225]: E0313 12:23:12.427673 3225 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.101-461ebd96c0\" not found" Mar 13 12:23:12.432452 kubelet[3225]: I0313 12:23:12.410628 3225 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:23:12.432452 kubelet[3225]: I0313 12:23:12.431564 3225 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 12:23:12.432600 kubelet[3225]: I0313 12:23:12.432538 3225 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 12:23:12.432600 kubelet[3225]: I0313 12:23:12.432555 3225 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 13 12:23:12.432600 kubelet[3225]: I0313 12:23:12.432575 3225 kubelet.go:2428] "Starting kubelet main sync loop" Mar 13 12:23:12.432663 kubelet[3225]: E0313 12:23:12.432621 3225 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 12:23:12.433141 kubelet[3225]: I0313 12:23:12.433126 3225 server.go:310] "Adding debug handlers to kubelet server" Mar 13 12:23:12.437440 kubelet[3225]: I0313 12:23:12.434275 3225 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 12:23:12.437678 kubelet[3225]: I0313 12:23:12.437651 3225 reconciler.go:29] "Reconciler: start to sync state" Mar 13 12:23:12.450516 kubelet[3225]: I0313 12:23:12.450483 3225 factory.go:223] Registration of the systemd container factory successfully Mar 13 12:23:12.450718 kubelet[3225]: I0313 12:23:12.450700 3225 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 12:23:12.457129 kubelet[3225]: I0313 12:23:12.457072 3225 factory.go:223] Registration of the containerd container factory successfully Mar 13 12:23:12.492258 kubelet[3225]: I0313 12:23:12.492235 3225 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 12:23:12.492462 kubelet[3225]: I0313 12:23:12.492415 3225 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 12:23:12.492537 kubelet[3225]: I0313 12:23:12.492528 3225 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:23:12.492755 kubelet[3225]: I0313 12:23:12.492732 3225 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 12:23:12.492840 kubelet[3225]: I0313 12:23:12.492807 3225 state_mem.go:96] "Updated CPUSet 
assignments" assignments={} Mar 13 12:23:12.492889 kubelet[3225]: I0313 12:23:12.492881 3225 policy_none.go:49] "None policy: Start" Mar 13 12:23:12.492967 kubelet[3225]: I0313 12:23:12.492957 3225 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 12:23:12.493035 kubelet[3225]: I0313 12:23:12.493026 3225 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 12:23:12.493274 kubelet[3225]: I0313 12:23:12.493253 3225 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 12:23:12.493357 kubelet[3225]: I0313 12:23:12.493349 3225 policy_none.go:47] "Start" Mar 13 12:23:12.497345 kubelet[3225]: E0313 12:23:12.497325 3225 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 12:23:12.497862 kubelet[3225]: I0313 12:23:12.497847 3225 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 12:23:12.497993 kubelet[3225]: I0313 12:23:12.497964 3225 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:23:12.498256 kubelet[3225]: I0313 12:23:12.498237 3225 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 12:23:12.499861 kubelet[3225]: E0313 12:23:12.499844 3225 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 12:23:12.533563 kubelet[3225]: I0313 12:23:12.533533 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.534314 kubelet[3225]: I0313 12:23:12.533993 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.534314 kubelet[3225]: I0313 12:23:12.534192 3225 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.540875 kubelet[3225]: I0313 12:23:12.540631 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.540875 kubelet[3225]: I0313 12:23:12.540677 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.540875 kubelet[3225]: I0313 12:23:12.540698 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21fd1cc8bb4bfdda18d742eab907ea19-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-461ebd96c0\" (UID: \"21fd1cc8bb4bfdda18d742eab907ea19\") " pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.540875 kubelet[3225]: I0313 12:23:12.540725 3225 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21fd1cc8bb4bfdda18d742eab907ea19-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-461ebd96c0\" (UID: \"21fd1cc8bb4bfdda18d742eab907ea19\") " pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.540875 kubelet[3225]: I0313 12:23:12.540745 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.541048 kubelet[3225]: I0313 12:23:12.540760 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6461d85ca71a951be5b0880dd56ae3da-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-461ebd96c0\" (UID: \"6461d85ca71a951be5b0880dd56ae3da\") " pod="kube-system/kube-scheduler-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.541048 kubelet[3225]: I0313 12:23:12.540777 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21fd1cc8bb4bfdda18d742eab907ea19-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-461ebd96c0\" (UID: \"21fd1cc8bb4bfdda18d742eab907ea19\") " pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.541048 kubelet[3225]: I0313 12:23:12.540818 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.541048 kubelet[3225]: I0313 12:23:12.540841 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/718e671e05f994af1198ebd00db17c48-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" (UID: \"718e671e05f994af1198ebd00db17c48\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.548372 kubelet[3225]: I0313 12:23:12.548344 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:23:12.548444 kubelet[3225]: E0313 12:23:12.548408 3225 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-461ebd96c0\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.549035 kubelet[3225]: I0313 12:23:12.549011 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:23:12.549072 kubelet[3225]: E0313 12:23:12.549047 3225 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-461ebd96c0\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.549557 kubelet[3225]: I0313 12:23:12.549535 3225 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:23:12.549615 kubelet[3225]: E0313 12:23:12.549582 3225 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.101-461ebd96c0\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.601840 kubelet[3225]: 
I0313 12:23:12.600729 3225 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.611759 kubelet[3225]: I0313 12:23:12.611731 3225 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:12.611872 kubelet[3225]: I0313 12:23:12.611801 3225 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.101-461ebd96c0" Mar 13 12:23:13.399818 kubelet[3225]: I0313 12:23:13.399780 3225 apiserver.go:52] "Watching apiserver" Mar 13 12:23:13.437854 kubelet[3225]: I0313 12:23:13.437802 3225 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 12:23:13.504232 kubelet[3225]: I0313 12:23:13.504175 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.101-461ebd96c0" podStartSLOduration=2.504159587 podStartE2EDuration="2.504159587s" podCreationTimestamp="2026-03-13 12:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:23:13.504000546 +0000 UTC m=+1.167580458" watchObservedRunningTime="2026-03-13 12:23:13.504159587 +0000 UTC m=+1.167739499" Mar 13 12:23:13.504443 kubelet[3225]: I0313 12:23:13.504272 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.101-461ebd96c0" podStartSLOduration=2.504268267 podStartE2EDuration="2.504268267s" podCreationTimestamp="2026-03-13 12:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:23:13.493385364 +0000 UTC m=+1.156965276" watchObservedRunningTime="2026-03-13 12:23:13.504268267 +0000 UTC m=+1.167848139" Mar 13 12:23:13.515452 kubelet[3225]: I0313 12:23:13.514644 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081.3.101-461ebd96c0" podStartSLOduration=2.514628049 podStartE2EDuration="2.514628049s" podCreationTimestamp="2026-03-13 12:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:23:13.514492449 +0000 UTC m=+1.178072361" watchObservedRunningTime="2026-03-13 12:23:13.514628049 +0000 UTC m=+1.178207961" Mar 13 12:23:15.494033 kubelet[3225]: I0313 12:23:15.493987 3225 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 12:23:15.494725 containerd[1735]: time="2026-03-13T12:23:15.494692444Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 12:23:15.494991 kubelet[3225]: I0313 12:23:15.494850 3225 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 12:23:16.436191 systemd[1]: Created slice kubepods-besteffort-pod61742f30_0211_4c11_8ea6_4553c059d26e.slice - libcontainer container kubepods-besteffort-pod61742f30_0211_4c11_8ea6_4553c059d26e.slice. 
Mar 13 12:23:16.461399 kubelet[3225]: I0313 12:23:16.461365 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61742f30-0211-4c11-8ea6-4553c059d26e-kube-proxy\") pod \"kube-proxy-xxx9r\" (UID: \"61742f30-0211-4c11-8ea6-4553c059d26e\") " pod="kube-system/kube-proxy-xxx9r" Mar 13 12:23:16.461603 kubelet[3225]: I0313 12:23:16.461586 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61742f30-0211-4c11-8ea6-4553c059d26e-xtables-lock\") pod \"kube-proxy-xxx9r\" (UID: \"61742f30-0211-4c11-8ea6-4553c059d26e\") " pod="kube-system/kube-proxy-xxx9r" Mar 13 12:23:16.463436 kubelet[3225]: I0313 12:23:16.461686 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61742f30-0211-4c11-8ea6-4553c059d26e-lib-modules\") pod \"kube-proxy-xxx9r\" (UID: \"61742f30-0211-4c11-8ea6-4553c059d26e\") " pod="kube-system/kube-proxy-xxx9r" Mar 13 12:23:16.463436 kubelet[3225]: I0313 12:23:16.461717 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzc2\" (UniqueName: \"kubernetes.io/projected/61742f30-0211-4c11-8ea6-4553c059d26e-kube-api-access-jxzc2\") pod \"kube-proxy-xxx9r\" (UID: \"61742f30-0211-4c11-8ea6-4553c059d26e\") " pod="kube-system/kube-proxy-xxx9r" Mar 13 12:23:16.756214 systemd[1]: Created slice kubepods-besteffort-pod7c927bef_7ce4_47d6_b829_84b5e57d49bd.slice - libcontainer container kubepods-besteffort-pod7c927bef_7ce4_47d6_b829_84b5e57d49bd.slice. 
Mar 13 12:23:16.763351 kubelet[3225]: I0313 12:23:16.763320 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9v9\" (UniqueName: \"kubernetes.io/projected/7c927bef-7ce4-47d6-b829-84b5e57d49bd-kube-api-access-zw9v9\") pod \"tigera-operator-5588576f44-k4grh\" (UID: \"7c927bef-7ce4-47d6-b829-84b5e57d49bd\") " pod="tigera-operator/tigera-operator-5588576f44-k4grh" Mar 13 12:23:16.763801 kubelet[3225]: I0313 12:23:16.763524 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c927bef-7ce4-47d6-b829-84b5e57d49bd-var-lib-calico\") pod \"tigera-operator-5588576f44-k4grh\" (UID: \"7c927bef-7ce4-47d6-b829-84b5e57d49bd\") " pod="tigera-operator/tigera-operator-5588576f44-k4grh" Mar 13 12:23:16.790558 containerd[1735]: time="2026-03-13T12:23:16.790517295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxx9r,Uid:61742f30-0211-4c11-8ea6-4553c059d26e,Namespace:kube-system,Attempt:0,}" Mar 13 12:23:17.095519 containerd[1735]: time="2026-03-13T12:23:17.093864104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-k4grh,Uid:7c927bef-7ce4-47d6-b829-84b5e57d49bd,Namespace:tigera-operator,Attempt:0,}" Mar 13 12:23:17.109412 containerd[1735]: time="2026-03-13T12:23:17.105941408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:23:17.109412 containerd[1735]: time="2026-03-13T12:23:17.105993608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:23:17.109412 containerd[1735]: time="2026-03-13T12:23:17.106004448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:17.109412 containerd[1735]: time="2026-03-13T12:23:17.106087609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:17.123636 systemd[1]: Started cri-containerd-d9e0c3ede765b5d30c2fca6d12a867f4138e04ddd7bf72e1ab4f5e184f99f41d.scope - libcontainer container d9e0c3ede765b5d30c2fca6d12a867f4138e04ddd7bf72e1ab4f5e184f99f41d. Mar 13 12:23:17.143727 containerd[1735]: time="2026-03-13T12:23:17.143688798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xxx9r,Uid:61742f30-0211-4c11-8ea6-4553c059d26e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9e0c3ede765b5d30c2fca6d12a867f4138e04ddd7bf72e1ab4f5e184f99f41d\"" Mar 13 12:23:17.205413 containerd[1735]: time="2026-03-13T12:23:17.205373871Z" level=info msg="CreateContainer within sandbox \"d9e0c3ede765b5d30c2fca6d12a867f4138e04ddd7bf72e1ab4f5e184f99f41d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 12:23:17.514496 containerd[1735]: time="2026-03-13T12:23:17.514362159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:23:17.514496 containerd[1735]: time="2026-03-13T12:23:17.514413559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:23:17.514739 containerd[1735]: time="2026-03-13T12:23:17.514660919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:17.514867 containerd[1735]: time="2026-03-13T12:23:17.514823800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:17.528572 systemd[1]: Started cri-containerd-3562e5d6389db567c6aee7cb3a236990e65755ac60d16bf7cc65b5a6134a41d3.scope - libcontainer container 3562e5d6389db567c6aee7cb3a236990e65755ac60d16bf7cc65b5a6134a41d3. Mar 13 12:23:17.562343 containerd[1735]: time="2026-03-13T12:23:17.562228767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-k4grh,Uid:7c927bef-7ce4-47d6-b829-84b5e57d49bd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3562e5d6389db567c6aee7cb3a236990e65755ac60d16bf7cc65b5a6134a41d3\"" Mar 13 12:23:17.566365 containerd[1735]: time="2026-03-13T12:23:17.565065012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 12:23:17.693689 containerd[1735]: time="2026-03-13T12:23:17.693636168Z" level=info msg="CreateContainer within sandbox \"d9e0c3ede765b5d30c2fca6d12a867f4138e04ddd7bf72e1ab4f5e184f99f41d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2a70421e4e3480624926ad268535008ece56d692db771714b681f2f5cc1f3cf\"" Mar 13 12:23:17.694689 containerd[1735]: time="2026-03-13T12:23:17.694599690Z" level=info msg="StartContainer for \"b2a70421e4e3480624926ad268535008ece56d692db771714b681f2f5cc1f3cf\"" Mar 13 12:23:17.723586 systemd[1]: Started cri-containerd-b2a70421e4e3480624926ad268535008ece56d692db771714b681f2f5cc1f3cf.scope - libcontainer container b2a70421e4e3480624926ad268535008ece56d692db771714b681f2f5cc1f3cf. 
Mar 13 12:23:18.747070 containerd[1735]: time="2026-03-13T12:23:18.747004423Z" level=info msg="StartContainer for \"b2a70421e4e3480624926ad268535008ece56d692db771714b681f2f5cc1f3cf\" returns successfully" Mar 13 12:23:19.786161 kubelet[3225]: I0313 12:23:19.786088 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xxx9r" podStartSLOduration=3.786075345 podStartE2EDuration="3.786075345s" podCreationTimestamp="2026-03-13 12:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:23:19.778932211 +0000 UTC m=+7.442512123" watchObservedRunningTime="2026-03-13 12:23:19.786075345 +0000 UTC m=+7.449655217" Mar 13 12:23:20.214786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875141619.mount: Deactivated successfully. Mar 13 12:23:21.004731 containerd[1735]: time="2026-03-13T12:23:21.004680924Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:21.007316 containerd[1735]: time="2026-03-13T12:23:21.007124048Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 13 12:23:21.010524 containerd[1735]: time="2026-03-13T12:23:21.010258654Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:21.014703 containerd[1735]: time="2026-03-13T12:23:21.014668702Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:21.015612 containerd[1735]: time="2026-03-13T12:23:21.015585664Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id 
\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 3.450489172s" Mar 13 12:23:21.015711 containerd[1735]: time="2026-03-13T12:23:21.015695344Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 13 12:23:21.024099 containerd[1735]: time="2026-03-13T12:23:21.024066160Z" level=info msg="CreateContainer within sandbox \"3562e5d6389db567c6aee7cb3a236990e65755ac60d16bf7cc65b5a6134a41d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 12:23:21.047751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269547659.mount: Deactivated successfully. Mar 13 12:23:21.055398 containerd[1735]: time="2026-03-13T12:23:21.055304178Z" level=info msg="CreateContainer within sandbox \"3562e5d6389db567c6aee7cb3a236990e65755ac60d16bf7cc65b5a6134a41d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8c84e15fd01477351f67e6f931737f1f7a97315b029132fe7ee051ff906fceb\"" Mar 13 12:23:21.055987 containerd[1735]: time="2026-03-13T12:23:21.055960539Z" level=info msg="StartContainer for \"e8c84e15fd01477351f67e6f931737f1f7a97315b029132fe7ee051ff906fceb\"" Mar 13 12:23:21.084657 systemd[1]: Started cri-containerd-e8c84e15fd01477351f67e6f931737f1f7a97315b029132fe7ee051ff906fceb.scope - libcontainer container e8c84e15fd01477351f67e6f931737f1f7a97315b029132fe7ee051ff906fceb. 
Mar 13 12:23:21.107162 containerd[1735]: time="2026-03-13T12:23:21.107116074Z" level=info msg="StartContainer for \"e8c84e15fd01477351f67e6f931737f1f7a97315b029132fe7ee051ff906fceb\" returns successfully" Mar 13 12:23:26.771678 sudo[2226]: pam_unix(sudo:session): session closed for user root Mar 13 12:23:26.852403 sshd[2223]: pam_unix(sshd:session): session closed for user core Mar 13 12:23:26.857295 systemd-logind[1698]: Session 9 logged out. Waiting for processes to exit. Mar 13 12:23:26.858015 systemd[1]: sshd@6-10.200.20.18:22-10.200.16.10:46776.service: Deactivated successfully. Mar 13 12:23:26.863734 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 12:23:26.864117 systemd[1]: session-9.scope: Consumed 6.942s CPU time, 153.3M memory peak, 0B memory swap peak. Mar 13 12:23:26.866495 systemd-logind[1698]: Removed session 9. Mar 13 12:23:31.655449 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:23:31.655597 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:23:32.730901 kubelet[3225]: I0313 12:23:32.729882 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-k4grh" podStartSLOduration=13.277528545 podStartE2EDuration="16.729866121s" podCreationTimestamp="2026-03-13 12:23:16 +0000 UTC" firstStartedPulling="2026-03-13 12:23:17.56406493 +0000 UTC m=+5.227644802" lastFinishedPulling="2026-03-13 12:23:21.016402466 +0000 UTC m=+8.679982378" observedRunningTime="2026-03-13 12:23:21.777850597 +0000 UTC m=+9.441430469" watchObservedRunningTime="2026-03-13 12:23:32.729866121 +0000 UTC m=+20.393446033" Mar 13 12:23:32.749848 systemd[1]: Created slice kubepods-besteffort-pod8a84a8b9_89b7_4605_95fa_7061da44f67f.slice - libcontainer container kubepods-besteffort-pod8a84a8b9_89b7_4605_95fa_7061da44f67f.slice. 
Mar 13 12:23:32.766013 kubelet[3225]: I0313 12:23:32.765908 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a84a8b9-89b7-4605-95fa-7061da44f67f-typha-certs\") pod \"calico-typha-849bc6dbd6-mlkrf\" (UID: \"8a84a8b9-89b7-4605-95fa-7061da44f67f\") " pod="calico-system/calico-typha-849bc6dbd6-mlkrf" Mar 13 12:23:32.766013 kubelet[3225]: I0313 12:23:32.765942 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a84a8b9-89b7-4605-95fa-7061da44f67f-tigera-ca-bundle\") pod \"calico-typha-849bc6dbd6-mlkrf\" (UID: \"8a84a8b9-89b7-4605-95fa-7061da44f67f\") " pod="calico-system/calico-typha-849bc6dbd6-mlkrf" Mar 13 12:23:32.766013 kubelet[3225]: I0313 12:23:32.765959 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ql4\" (UniqueName: \"kubernetes.io/projected/8a84a8b9-89b7-4605-95fa-7061da44f67f-kube-api-access-m8ql4\") pod \"calico-typha-849bc6dbd6-mlkrf\" (UID: \"8a84a8b9-89b7-4605-95fa-7061da44f67f\") " pod="calico-system/calico-typha-849bc6dbd6-mlkrf" Mar 13 12:23:32.884787 systemd[1]: Created slice kubepods-besteffort-podfea3f860_9650_45c9_973f_c699411d19a2.slice - libcontainer container kubepods-besteffort-podfea3f860_9650_45c9_973f_c699411d19a2.slice. 
Mar 13 12:23:32.966668 kubelet[3225]: I0313 12:23:32.966622 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-bpffs\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.966668 kubelet[3225]: I0313 12:23:32.966662 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-cni-net-dir\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.966668 kubelet[3225]: I0313 12:23:32.966679 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-sys-fs\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968410 kubelet[3225]: I0313 12:23:32.966694 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-xtables-lock\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968410 kubelet[3225]: I0313 12:23:32.966709 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcfv\" (UniqueName: \"kubernetes.io/projected/fea3f860-9650-45c9-973f-c699411d19a2-kube-api-access-vqcfv\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968410 kubelet[3225]: I0313 12:23:32.966729 3225 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-var-run-calico\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968410 kubelet[3225]: I0313 12:23:32.966746 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-lib-modules\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968410 kubelet[3225]: I0313 12:23:32.966760 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fea3f860-9650-45c9-973f-c699411d19a2-node-certs\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968611 kubelet[3225]: I0313 12:23:32.966813 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-nodeproc\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968611 kubelet[3225]: I0313 12:23:32.966853 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-var-lib-calico\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968611 kubelet[3225]: I0313 12:23:32.966909 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" 
(UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-cni-bin-dir\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968611 kubelet[3225]: I0313 12:23:32.966926 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-cni-log-dir\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968611 kubelet[3225]: I0313 12:23:32.966940 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-policysync\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968720 kubelet[3225]: I0313 12:23:32.966959 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fea3f860-9650-45c9-973f-c699411d19a2-flexvol-driver-host\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.968720 kubelet[3225]: I0313 12:23:32.966981 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea3f860-9650-45c9-973f-c699411d19a2-tigera-ca-bundle\") pod \"calico-node-7z7bk\" (UID: \"fea3f860-9650-45c9-973f-c699411d19a2\") " pod="calico-system/calico-node-7z7bk" Mar 13 12:23:32.971467 kubelet[3225]: E0313 12:23:32.970589 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:33.058860 containerd[1735]: time="2026-03-13T12:23:33.058169496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849bc6dbd6-mlkrf,Uid:8a84a8b9-89b7-4605-95fa-7061da44f67f,Namespace:calico-system,Attempt:0,}" Mar 13 12:23:33.068873 kubelet[3225]: I0313 12:23:33.067457 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1e7c6c09-c767-4bee-9251-d729af24c7dc-varrun\") pod \"csi-node-driver-q6rzg\" (UID: \"1e7c6c09-c767-4bee-9251-d729af24c7dc\") " pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:23:33.068873 kubelet[3225]: I0313 12:23:33.067496 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j68cb\" (UniqueName: \"kubernetes.io/projected/1e7c6c09-c767-4bee-9251-d729af24c7dc-kube-api-access-j68cb\") pod \"csi-node-driver-q6rzg\" (UID: \"1e7c6c09-c767-4bee-9251-d729af24c7dc\") " pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:23:33.068873 kubelet[3225]: I0313 12:23:33.067545 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e7c6c09-c767-4bee-9251-d729af24c7dc-kubelet-dir\") pod \"csi-node-driver-q6rzg\" (UID: \"1e7c6c09-c767-4bee-9251-d729af24c7dc\") " pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:23:33.068873 kubelet[3225]: I0313 12:23:33.067592 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e7c6c09-c767-4bee-9251-d729af24c7dc-registration-dir\") pod \"csi-node-driver-q6rzg\" (UID: \"1e7c6c09-c767-4bee-9251-d729af24c7dc\") " pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:23:33.068873 kubelet[3225]: 
I0313 12:23:33.067611 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e7c6c09-c767-4bee-9251-d729af24c7dc-socket-dir\") pod \"csi-node-driver-q6rzg\" (UID: \"1e7c6c09-c767-4bee-9251-d729af24c7dc\") " pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:23:33.077947 kubelet[3225]: E0313 12:23:33.077732 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.077947 kubelet[3225]: W0313 12:23:33.077753 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.078065 kubelet[3225]: E0313 12:23:33.077894 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.087352 kubelet[3225]: E0313 12:23:33.087292 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.087352 kubelet[3225]: W0313 12:23:33.087306 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.087352 kubelet[3225]: E0313 12:23:33.087322 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.098951 containerd[1735]: time="2026-03-13T12:23:33.098863012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:23:33.098951 containerd[1735]: time="2026-03-13T12:23:33.098907572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:23:33.098951 containerd[1735]: time="2026-03-13T12:23:33.098917452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:33.100510 containerd[1735]: time="2026-03-13T12:23:33.099143812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:33.116732 systemd[1]: Started cri-containerd-dcffe0ed4400014acd682e2393a66932d3d088174a8ab988e94711f76e670fa3.scope - libcontainer container dcffe0ed4400014acd682e2393a66932d3d088174a8ab988e94711f76e670fa3. Mar 13 12:23:33.146395 containerd[1735]: time="2026-03-13T12:23:33.146349261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-849bc6dbd6-mlkrf,Uid:8a84a8b9-89b7-4605-95fa-7061da44f67f,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcffe0ed4400014acd682e2393a66932d3d088174a8ab988e94711f76e670fa3\"" Mar 13 12:23:33.148219 containerd[1735]: time="2026-03-13T12:23:33.148075224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 12:23:33.168447 kubelet[3225]: E0313 12:23:33.168351 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.168447 kubelet[3225]: W0313 12:23:33.168371 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.168447 kubelet[3225]: E0313 12:23:33.168398 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.168853 kubelet[3225]: E0313 12:23:33.168766 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.168853 kubelet[3225]: W0313 12:23:33.168780 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.168853 kubelet[3225]: E0313 12:23:33.168792 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.169004 kubelet[3225]: E0313 12:23:33.168960 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.169004 kubelet[3225]: W0313 12:23:33.168970 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.169004 kubelet[3225]: E0313 12:23:33.168981 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.169213 kubelet[3225]: E0313 12:23:33.169198 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.169213 kubelet[3225]: W0313 12:23:33.169212 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.169293 kubelet[3225]: E0313 12:23:33.169224 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.169400 kubelet[3225]: E0313 12:23:33.169386 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.169463 kubelet[3225]: W0313 12:23:33.169401 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.169463 kubelet[3225]: E0313 12:23:33.169411 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.169684 kubelet[3225]: E0313 12:23:33.169671 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.169684 kubelet[3225]: W0313 12:23:33.169682 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.169771 kubelet[3225]: E0313 12:23:33.169693 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.169891 kubelet[3225]: E0313 12:23:33.169876 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.169891 kubelet[3225]: W0313 12:23:33.169889 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.169953 kubelet[3225]: E0313 12:23:33.169901 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.170064 kubelet[3225]: E0313 12:23:33.170050 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.170064 kubelet[3225]: W0313 12:23:33.170061 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.170131 kubelet[3225]: E0313 12:23:33.170069 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.170244 kubelet[3225]: E0313 12:23:33.170231 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.170244 kubelet[3225]: W0313 12:23:33.170242 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.170323 kubelet[3225]: E0313 12:23:33.170251 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.170397 kubelet[3225]: E0313 12:23:33.170384 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.170397 kubelet[3225]: W0313 12:23:33.170394 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.170483 kubelet[3225]: E0313 12:23:33.170403 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.170573 kubelet[3225]: E0313 12:23:33.170561 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.170573 kubelet[3225]: W0313 12:23:33.170571 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.170642 kubelet[3225]: E0313 12:23:33.170581 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.170743 kubelet[3225]: E0313 12:23:33.170730 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.170743 kubelet[3225]: W0313 12:23:33.170741 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.170814 kubelet[3225]: E0313 12:23:33.170750 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.170921 kubelet[3225]: E0313 12:23:33.170910 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.170921 kubelet[3225]: W0313 12:23:33.170919 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.170992 kubelet[3225]: E0313 12:23:33.170927 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.171090 kubelet[3225]: E0313 12:23:33.171077 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.171090 kubelet[3225]: W0313 12:23:33.171088 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.171243 kubelet[3225]: E0313 12:23:33.171097 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.171342 kubelet[3225]: E0313 12:23:33.171329 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.171448 kubelet[3225]: W0313 12:23:33.171399 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.171448 kubelet[3225]: E0313 12:23:33.171415 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.171693 kubelet[3225]: E0313 12:23:33.171677 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.171693 kubelet[3225]: W0313 12:23:33.171693 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.171788 kubelet[3225]: E0313 12:23:33.171707 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.171867 kubelet[3225]: E0313 12:23:33.171853 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.171867 kubelet[3225]: W0313 12:23:33.171865 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.171928 kubelet[3225]: E0313 12:23:33.171875 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.172282 kubelet[3225]: E0313 12:23:33.172166 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.172282 kubelet[3225]: W0313 12:23:33.172179 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.172282 kubelet[3225]: E0313 12:23:33.172191 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.172400 kubelet[3225]: E0313 12:23:33.172337 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.172400 kubelet[3225]: W0313 12:23:33.172344 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.172400 kubelet[3225]: E0313 12:23:33.172352 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.172600 kubelet[3225]: E0313 12:23:33.172519 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.172600 kubelet[3225]: W0313 12:23:33.172527 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.172600 kubelet[3225]: E0313 12:23:33.172536 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.172814 kubelet[3225]: E0313 12:23:33.172722 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.172814 kubelet[3225]: W0313 12:23:33.172730 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.172814 kubelet[3225]: E0313 12:23:33.172739 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.172914 kubelet[3225]: E0313 12:23:33.172895 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.172914 kubelet[3225]: W0313 12:23:33.172908 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.173000 kubelet[3225]: E0313 12:23:33.172917 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.173082 kubelet[3225]: E0313 12:23:33.173070 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.173082 kubelet[3225]: W0313 12:23:33.173080 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.173136 kubelet[3225]: E0313 12:23:33.173088 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.173244 kubelet[3225]: E0313 12:23:33.173234 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.173244 kubelet[3225]: W0313 12:23:33.173243 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.173304 kubelet[3225]: E0313 12:23:33.173252 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.173405 kubelet[3225]: E0313 12:23:33.173395 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.173405 kubelet[3225]: W0313 12:23:33.173404 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.173478 kubelet[3225]: E0313 12:23:33.173412 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:33.185059 kubelet[3225]: E0313 12:23:33.185021 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:33.185059 kubelet[3225]: W0313 12:23:33.185034 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:33.185059 kubelet[3225]: E0313 12:23:33.185049 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:33.195242 containerd[1735]: time="2026-03-13T12:23:33.195210752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7z7bk,Uid:fea3f860-9650-45c9-973f-c699411d19a2,Namespace:calico-system,Attempt:0,}" Mar 13 12:23:33.232063 containerd[1735]: time="2026-03-13T12:23:33.231905101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:23:33.232063 containerd[1735]: time="2026-03-13T12:23:33.231975621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:23:33.232063 containerd[1735]: time="2026-03-13T12:23:33.231986141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:33.232294 containerd[1735]: time="2026-03-13T12:23:33.232156461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:23:33.249599 systemd[1]: Started cri-containerd-8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f.scope - libcontainer container 8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f. Mar 13 12:23:33.268790 containerd[1735]: time="2026-03-13T12:23:33.268658170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7z7bk,Uid:fea3f860-9650-45c9-973f-c699411d19a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\"" Mar 13 12:23:34.350668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617320845.mount: Deactivated successfully. Mar 13 12:23:34.433945 kubelet[3225]: E0313 12:23:34.433398 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:34.901782 containerd[1735]: time="2026-03-13T12:23:34.901730266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:34.904718 containerd[1735]: time="2026-03-13T12:23:34.904685351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 13 12:23:34.908148 containerd[1735]: time="2026-03-13T12:23:34.907923917Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:34.913250 containerd[1735]: time="2026-03-13T12:23:34.913222247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:34.914152 containerd[1735]: time="2026-03-13T12:23:34.913823088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.765717224s" Mar 13 12:23:34.914152 containerd[1735]: time="2026-03-13T12:23:34.913852688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 13 12:23:34.916117 containerd[1735]: time="2026-03-13T12:23:34.915068851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 12:23:34.932602 containerd[1735]: time="2026-03-13T12:23:34.932412683Z" level=info msg="CreateContainer within sandbox \"dcffe0ed4400014acd682e2393a66932d3d088174a8ab988e94711f76e670fa3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 12:23:34.956116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1040602579.mount: Deactivated successfully. 
Mar 13 12:23:34.973454 containerd[1735]: time="2026-03-13T12:23:34.972640918Z" level=info msg="CreateContainer within sandbox \"dcffe0ed4400014acd682e2393a66932d3d088174a8ab988e94711f76e670fa3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a95d96b7d5becccf57d924c5bcff9dbfd41fad03d944d6463071aec61bb58839\"" Mar 13 12:23:34.973745 containerd[1735]: time="2026-03-13T12:23:34.973721640Z" level=info msg="StartContainer for \"a95d96b7d5becccf57d924c5bcff9dbfd41fad03d944d6463071aec61bb58839\"" Mar 13 12:23:34.999581 systemd[1]: Started cri-containerd-a95d96b7d5becccf57d924c5bcff9dbfd41fad03d944d6463071aec61bb58839.scope - libcontainer container a95d96b7d5becccf57d924c5bcff9dbfd41fad03d944d6463071aec61bb58839. Mar 13 12:23:35.031917 containerd[1735]: time="2026-03-13T12:23:35.031685549Z" level=info msg="StartContainer for \"a95d96b7d5becccf57d924c5bcff9dbfd41fad03d944d6463071aec61bb58839\" returns successfully" Mar 13 12:23:35.806999 kubelet[3225]: I0313 12:23:35.806883 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-849bc6dbd6-mlkrf" podStartSLOduration=2.039728395 podStartE2EDuration="3.806861382s" podCreationTimestamp="2026-03-13 12:23:32 +0000 UTC" firstStartedPulling="2026-03-13 12:23:33.147812143 +0000 UTC m=+20.811392055" lastFinishedPulling="2026-03-13 12:23:34.91494513 +0000 UTC m=+22.578525042" observedRunningTime="2026-03-13 12:23:35.80595034 +0000 UTC m=+23.469530292" watchObservedRunningTime="2026-03-13 12:23:35.806861382 +0000 UTC m=+23.470441294" Mar 13 12:23:35.866587 kubelet[3225]: E0313 12:23:35.866442 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.866587 kubelet[3225]: W0313 12:23:35.866467 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in 
$PATH, output: "" Mar 13 12:23:35.866587 kubelet[3225]: E0313 12:23:35.866487 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.867279 kubelet[3225]: E0313 12:23:35.866646 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.867279 kubelet[3225]: W0313 12:23:35.866658 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.867279 kubelet[3225]: E0313 12:23:35.866691 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.867279 kubelet[3225]: E0313 12:23:35.866835 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.867279 kubelet[3225]: W0313 12:23:35.866844 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.867279 kubelet[3225]: E0313 12:23:35.866852 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.867279 kubelet[3225]: E0313 12:23:35.866985 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.867279 kubelet[3225]: W0313 12:23:35.867092 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.867279 kubelet[3225]: E0313 12:23:35.867107 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.867884 kubelet[3225]: E0313 12:23:35.867588 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.867884 kubelet[3225]: W0313 12:23:35.867599 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.867884 kubelet[3225]: E0313 12:23:35.867612 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.868131 kubelet[3225]: E0313 12:23:35.868011 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.868131 kubelet[3225]: W0313 12:23:35.868024 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.868131 kubelet[3225]: E0313 12:23:35.868035 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.868360 kubelet[3225]: E0313 12:23:35.868295 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.868360 kubelet[3225]: W0313 12:23:35.868308 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.868360 kubelet[3225]: E0313 12:23:35.868319 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.868593 kubelet[3225]: E0313 12:23:35.868582 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.868718 kubelet[3225]: W0313 12:23:35.868656 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.868718 kubelet[3225]: E0313 12:23:35.868671 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.869012 kubelet[3225]: E0313 12:23:35.868928 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.869012 kubelet[3225]: W0313 12:23:35.868938 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.869012 kubelet[3225]: E0313 12:23:35.868949 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.869203 kubelet[3225]: E0313 12:23:35.869150 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.869203 kubelet[3225]: W0313 12:23:35.869160 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.869203 kubelet[3225]: E0313 12:23:35.869170 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.869528 kubelet[3225]: E0313 12:23:35.869420 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.869528 kubelet[3225]: W0313 12:23:35.869444 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.869528 kubelet[3225]: E0313 12:23:35.869455 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.869733 kubelet[3225]: E0313 12:23:35.869675 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.869733 kubelet[3225]: W0313 12:23:35.869685 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.869733 kubelet[3225]: E0313 12:23:35.869694 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.870030 kubelet[3225]: E0313 12:23:35.869941 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.870030 kubelet[3225]: W0313 12:23:35.869951 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.870030 kubelet[3225]: E0313 12:23:35.869961 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.870280 kubelet[3225]: E0313 12:23:35.870185 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.870280 kubelet[3225]: W0313 12:23:35.870194 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.870280 kubelet[3225]: E0313 12:23:35.870204 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.870509 kubelet[3225]: E0313 12:23:35.870419 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.870509 kubelet[3225]: W0313 12:23:35.870444 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.870509 kubelet[3225]: E0313 12:23:35.870456 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.884966 kubelet[3225]: E0313 12:23:35.884950 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.885157 kubelet[3225]: W0313 12:23:35.885046 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.885157 kubelet[3225]: E0313 12:23:35.885063 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.885334 kubelet[3225]: E0313 12:23:35.885292 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.885334 kubelet[3225]: W0313 12:23:35.885312 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.885334 kubelet[3225]: E0313 12:23:35.885323 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.885693 kubelet[3225]: E0313 12:23:35.885597 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.885693 kubelet[3225]: W0313 12:23:35.885610 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.885693 kubelet[3225]: E0313 12:23:35.885621 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.886006 kubelet[3225]: E0313 12:23:35.885898 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.886006 kubelet[3225]: W0313 12:23:35.885909 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.886006 kubelet[3225]: E0313 12:23:35.885919 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.886241 kubelet[3225]: E0313 12:23:35.886149 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.886241 kubelet[3225]: W0313 12:23:35.886159 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.886241 kubelet[3225]: E0313 12:23:35.886169 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.886566 kubelet[3225]: E0313 12:23:35.886473 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.886566 kubelet[3225]: W0313 12:23:35.886484 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.886566 kubelet[3225]: E0313 12:23:35.886494 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.887144 kubelet[3225]: E0313 12:23:35.887016 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.887144 kubelet[3225]: W0313 12:23:35.887027 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.887144 kubelet[3225]: E0313 12:23:35.887038 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.887380 kubelet[3225]: E0313 12:23:35.887287 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.887380 kubelet[3225]: W0313 12:23:35.887298 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.887380 kubelet[3225]: E0313 12:23:35.887309 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.887612 kubelet[3225]: E0313 12:23:35.887527 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.887612 kubelet[3225]: W0313 12:23:35.887538 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.887612 kubelet[3225]: E0313 12:23:35.887548 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.887854 kubelet[3225]: E0313 12:23:35.887762 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.887854 kubelet[3225]: W0313 12:23:35.887772 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.887854 kubelet[3225]: E0313 12:23:35.887781 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.888147 kubelet[3225]: E0313 12:23:35.888055 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.888147 kubelet[3225]: W0313 12:23:35.888066 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.888147 kubelet[3225]: E0313 12:23:35.888076 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.888521 kubelet[3225]: E0313 12:23:35.888359 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.888521 kubelet[3225]: W0313 12:23:35.888370 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.888521 kubelet[3225]: E0313 12:23:35.888380 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.888701 kubelet[3225]: E0313 12:23:35.888670 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.888701 kubelet[3225]: W0313 12:23:35.888680 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.888701 kubelet[3225]: E0313 12:23:35.888691 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.889128 kubelet[3225]: E0313 12:23:35.888981 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.889128 kubelet[3225]: W0313 12:23:35.888991 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.889128 kubelet[3225]: E0313 12:23:35.889001 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.889293 kubelet[3225]: E0313 12:23:35.889263 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.889293 kubelet[3225]: W0313 12:23:35.889273 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.889293 kubelet[3225]: E0313 12:23:35.889283 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.889715 kubelet[3225]: E0313 12:23:35.889600 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.889715 kubelet[3225]: W0313 12:23:35.889617 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.889715 kubelet[3225]: E0313 12:23:35.889628 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:35.890148 kubelet[3225]: E0313 12:23:35.890041 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.890148 kubelet[3225]: W0313 12:23:35.890051 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.890148 kubelet[3225]: E0313 12:23:35.890061 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:23:35.890331 kubelet[3225]: E0313 12:23:35.890204 3225 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:23:35.890331 kubelet[3225]: W0313 12:23:35.890212 3225 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:23:35.890331 kubelet[3225]: E0313 12:23:35.890222 3225 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:23:36.075055 containerd[1735]: time="2026-03-13T12:23:36.073632075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:36.076249 containerd[1735]: time="2026-03-13T12:23:36.076056919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 13 12:23:36.079139 containerd[1735]: time="2026-03-13T12:23:36.079076525Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:36.083528 containerd[1735]: time="2026-03-13T12:23:36.083464853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:36.084152 containerd[1735]: time="2026-03-13T12:23:36.084043894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.168945363s" Mar 13 12:23:36.084152 containerd[1735]: time="2026-03-13T12:23:36.084075934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 13 12:23:36.091789 containerd[1735]: time="2026-03-13T12:23:36.091760028Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 12:23:36.125663 containerd[1735]: time="2026-03-13T12:23:36.125599811Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e\"" Mar 13 12:23:36.127146 containerd[1735]: time="2026-03-13T12:23:36.126179332Z" level=info msg="StartContainer for \"329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e\"" Mar 13 12:23:36.157574 systemd[1]: Started cri-containerd-329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e.scope - libcontainer container 329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e. Mar 13 12:23:36.182670 containerd[1735]: time="2026-03-13T12:23:36.182630556Z" level=info msg="StartContainer for \"329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e\" returns successfully" Mar 13 12:23:36.186225 systemd[1]: cri-containerd-329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e.scope: Deactivated successfully. Mar 13 12:23:36.432959 kubelet[3225]: E0313 12:23:36.432917 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:36.796847 kubelet[3225]: I0313 12:23:36.796345 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:23:36.921183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e-rootfs.mount: Deactivated successfully. 
Mar 13 12:23:37.317243 containerd[1735]: time="2026-03-13T12:23:37.317176652Z" level=info msg="shim disconnected" id=329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e namespace=k8s.io Mar 13 12:23:37.317243 containerd[1735]: time="2026-03-13T12:23:37.317237853Z" level=warning msg="cleaning up after shim disconnected" id=329bc79035f0ae4850ad9304da82da88017b2d7085f618a4a048565f8705de8e namespace=k8s.io Mar 13 12:23:37.317243 containerd[1735]: time="2026-03-13T12:23:37.317246693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 12:23:37.801142 containerd[1735]: time="2026-03-13T12:23:37.799922784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 12:23:38.435655 kubelet[3225]: E0313 12:23:38.435613 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:40.433790 kubelet[3225]: E0313 12:23:40.433747 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:41.935590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177911271.mount: Deactivated successfully. 
Mar 13 12:23:42.307525 containerd[1735]: time="2026-03-13T12:23:42.306733391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:42.309291 containerd[1735]: time="2026-03-13T12:23:42.309263436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 13 12:23:42.313870 containerd[1735]: time="2026-03-13T12:23:42.313840205Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:42.320189 containerd[1735]: time="2026-03-13T12:23:42.319129134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:42.320189 containerd[1735]: time="2026-03-13T12:23:42.319724095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.518815469s" Mar 13 12:23:42.320189 containerd[1735]: time="2026-03-13T12:23:42.319753855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 13 12:23:42.328341 containerd[1735]: time="2026-03-13T12:23:42.328309151Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 12:23:42.361146 containerd[1735]: time="2026-03-13T12:23:42.361104172Z" level=info 
msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06\"" Mar 13 12:23:42.363147 containerd[1735]: time="2026-03-13T12:23:42.361767853Z" level=info msg="StartContainer for \"3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06\"" Mar 13 12:23:42.385811 systemd[1]: run-containerd-runc-k8s.io-3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06-runc.1GM5w5.mount: Deactivated successfully. Mar 13 12:23:42.393570 systemd[1]: Started cri-containerd-3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06.scope - libcontainer container 3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06. Mar 13 12:23:42.420725 containerd[1735]: time="2026-03-13T12:23:42.420685322Z" level=info msg="StartContainer for \"3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06\" returns successfully" Mar 13 12:23:42.435711 kubelet[3225]: E0313 12:23:42.435643 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:42.462164 systemd[1]: cri-containerd-3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06.scope: Deactivated successfully. Mar 13 12:23:42.935973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06-rootfs.mount: Deactivated successfully. 
Mar 13 12:23:43.796788 containerd[1735]: time="2026-03-13T12:23:43.796579160Z" level=info msg="shim disconnected" id=3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06 namespace=k8s.io Mar 13 12:23:43.796788 containerd[1735]: time="2026-03-13T12:23:43.796633160Z" level=warning msg="cleaning up after shim disconnected" id=3e276ecd80d3dcf2001092e2cf6926a0158a6abf5e684bf196b7dcd52afd2a06 namespace=k8s.io Mar 13 12:23:43.796788 containerd[1735]: time="2026-03-13T12:23:43.796642120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 12:23:43.815472 containerd[1735]: time="2026-03-13T12:23:43.815036545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 12:23:44.434573 kubelet[3225]: E0313 12:23:44.433442 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:46.433334 kubelet[3225]: E0313 12:23:46.433289 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:48.433641 kubelet[3225]: E0313 12:23:48.433557 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:49.396357 containerd[1735]: time="2026-03-13T12:23:49.395552632Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:49.397950 containerd[1735]: time="2026-03-13T12:23:49.397920396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 13 12:23:49.443249 containerd[1735]: time="2026-03-13T12:23:49.443161519Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:49.488520 containerd[1735]: time="2026-03-13T12:23:49.488465761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:23:49.490154 containerd[1735]: time="2026-03-13T12:23:49.489472883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 5.674397098s" Mar 13 12:23:49.490154 containerd[1735]: time="2026-03-13T12:23:49.489506643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 13 12:23:49.496660 containerd[1735]: time="2026-03-13T12:23:49.496630096Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 12:23:49.699384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1936706248.mount: Deactivated successfully. 
Mar 13 12:23:49.853014 containerd[1735]: time="2026-03-13T12:23:49.852833505Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc\"" Mar 13 12:23:49.853543 containerd[1735]: time="2026-03-13T12:23:49.853245426Z" level=info msg="StartContainer for \"0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc\"" Mar 13 12:23:49.881573 systemd[1]: Started cri-containerd-0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc.scope - libcontainer container 0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc. Mar 13 12:23:49.909817 containerd[1735]: time="2026-03-13T12:23:49.909767969Z" level=info msg="StartContainer for \"0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc\" returns successfully" Mar 13 12:23:50.433699 kubelet[3225]: E0313 12:23:50.433631 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:51.087026 kubelet[3225]: I0313 12:23:51.086863 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:23:52.436446 kubelet[3225]: E0313 12:23:52.433658 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:54.434730 kubelet[3225]: E0313 12:23:54.433999 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:56.434466 kubelet[3225]: E0313 12:23:56.433569 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:23:59.441023 kubelet[3225]: E0313 12:23:58.433075 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:24:00.263093 systemd[1]: cri-containerd-0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc.scope: Deactivated successfully. Mar 13 12:24:00.284109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc-rootfs.mount: Deactivated successfully. Mar 13 12:24:00.350263 kubelet[3225]: I0313 12:24:00.350233 3225 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 13 12:24:03.205203 systemd[1]: Created slice kubepods-burstable-pode6e79ed8_8793_4170_b6be_57dc4f48b338.slice - libcontainer container kubepods-burstable-pode6e79ed8_8793_4170_b6be_57dc4f48b338.slice. 
Mar 13 12:24:03.205792 containerd[1735]: time="2026-03-13T12:24:03.205307004Z" level=info msg="shim disconnected" id=0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc namespace=k8s.io Mar 13 12:24:03.205792 containerd[1735]: time="2026-03-13T12:24:03.205354284Z" level=warning msg="cleaning up after shim disconnected" id=0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc namespace=k8s.io Mar 13 12:24:03.205792 containerd[1735]: time="2026-03-13T12:24:03.205362644Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 12:24:03.212087 containerd[1735]: time="2026-03-13T12:24:03.210717933Z" level=error msg="collecting metrics for 0309b7e27d476efb3ac05071e97d0f8cfc76714ed98c8aec2afa2d5012e455fc" error="ttrpc: closed: unknown" Mar 13 12:24:03.223627 systemd[1]: Created slice kubepods-besteffort-pod7bda6270_360d_45ae_b4e4_6b5189003539.slice - libcontainer container kubepods-besteffort-pod7bda6270_360d_45ae_b4e4_6b5189003539.slice. Mar 13 12:24:03.237407 kubelet[3225]: I0313 12:24:03.237255 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8131164-7f35-4890-a8dd-a27d2707cba1-config-volume\") pod \"coredns-66bc5c9577-xclsd\" (UID: \"a8131164-7f35-4890-a8dd-a27d2707cba1\") " pod="kube-system/coredns-66bc5c9577-xclsd" Mar 13 12:24:03.237407 kubelet[3225]: I0313 12:24:03.237292 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp8gl\" (UniqueName: \"kubernetes.io/projected/f9d96381-f483-497e-aa8b-0362e6cdf815-kube-api-access-kp8gl\") pod \"calico-kube-controllers-64c54c94bc-57wqz\" (UID: \"f9d96381-f483-497e-aa8b-0362e6cdf815\") " pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" Mar 13 12:24:03.237407 kubelet[3225]: I0313 12:24:03.237316 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/e6e79ed8-8793-4170-b6be-57dc4f48b338-config-volume\") pod \"coredns-66bc5c9577-cbn7n\" (UID: \"e6e79ed8-8793-4170-b6be-57dc4f48b338\") " pod="kube-system/coredns-66bc5c9577-cbn7n" Mar 13 12:24:03.237407 kubelet[3225]: I0313 12:24:03.237332 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czd9\" (UniqueName: \"kubernetes.io/projected/e6e79ed8-8793-4170-b6be-57dc4f48b338-kube-api-access-7czd9\") pod \"coredns-66bc5c9577-cbn7n\" (UID: \"e6e79ed8-8793-4170-b6be-57dc4f48b338\") " pod="kube-system/coredns-66bc5c9577-cbn7n" Mar 13 12:24:03.237407 kubelet[3225]: I0313 12:24:03.237349 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-nginx-config\") pod \"whisker-7587867c-vjb5k\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " pod="calico-system/whisker-7587867c-vjb5k" Mar 13 12:24:03.238101 kubelet[3225]: I0313 12:24:03.237368 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-backend-key-pair\") pod \"whisker-7587867c-vjb5k\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " pod="calico-system/whisker-7587867c-vjb5k" Mar 13 12:24:03.238101 kubelet[3225]: I0313 12:24:03.237383 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-ca-bundle\") pod \"whisker-7587867c-vjb5k\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " pod="calico-system/whisker-7587867c-vjb5k" Mar 13 12:24:03.238101 kubelet[3225]: I0313 12:24:03.237396 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mcj5v\" (UniqueName: \"kubernetes.io/projected/a8131164-7f35-4890-a8dd-a27d2707cba1-kube-api-access-mcj5v\") pod \"coredns-66bc5c9577-xclsd\" (UID: \"a8131164-7f35-4890-a8dd-a27d2707cba1\") " pod="kube-system/coredns-66bc5c9577-xclsd" Mar 13 12:24:03.238101 kubelet[3225]: I0313 12:24:03.237411 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9d96381-f483-497e-aa8b-0362e6cdf815-tigera-ca-bundle\") pod \"calico-kube-controllers-64c54c94bc-57wqz\" (UID: \"f9d96381-f483-497e-aa8b-0362e6cdf815\") " pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" Mar 13 12:24:03.237476 systemd[1]: Created slice kubepods-burstable-poda8131164_7f35_4890_a8dd_a27d2707cba1.slice - libcontainer container kubepods-burstable-poda8131164_7f35_4890_a8dd_a27d2707cba1.slice. Mar 13 12:24:03.243065 kubelet[3225]: I0313 12:24:03.242444 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhmcj\" (UniqueName: \"kubernetes.io/projected/7bda6270-360d-45ae-b4e4-6b5189003539-kube-api-access-rhmcj\") pod \"whisker-7587867c-vjb5k\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " pod="calico-system/whisker-7587867c-vjb5k" Mar 13 12:24:03.250737 systemd[1]: Created slice kubepods-besteffort-pod1e7c6c09_c767_4bee_9251_d729af24c7dc.slice - libcontainer container kubepods-besteffort-pod1e7c6c09_c767_4bee_9251_d729af24c7dc.slice. Mar 13 12:24:03.264257 systemd[1]: Created slice kubepods-besteffort-podf9d96381_f483_497e_aa8b_0362e6cdf815.slice - libcontainer container kubepods-besteffort-podf9d96381_f483_497e_aa8b_0362e6cdf815.slice. 
Mar 13 12:24:03.270036 containerd[1735]: time="2026-03-13T12:24:03.270002036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q6rzg,Uid:1e7c6c09-c767-4bee-9251-d729af24c7dc,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:03.277266 systemd[1]: Created slice kubepods-besteffort-podf97ecaa3_f993_4c5c_ba9c_9db8e1bdfbf1.slice - libcontainer container kubepods-besteffort-podf97ecaa3_f993_4c5c_ba9c_9db8e1bdfbf1.slice. Mar 13 12:24:03.284845 systemd[1]: Created slice kubepods-besteffort-pod42dbc5a5_80ff_4c29_b964_19cfe9b7fccd.slice - libcontainer container kubepods-besteffort-pod42dbc5a5_80ff_4c29_b964_19cfe9b7fccd.slice. Mar 13 12:24:03.291719 systemd[1]: Created slice kubepods-besteffort-pod0a346f55_3551_472a_8019_424b407e361b.slice - libcontainer container kubepods-besteffort-pod0a346f55_3551_472a_8019_424b407e361b.slice. Mar 13 12:24:03.344280 kubelet[3225]: I0313 12:24:03.343697 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7ft\" (UniqueName: \"kubernetes.io/projected/f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1-kube-api-access-9c7ft\") pod \"calico-apiserver-6c7f959487-nm874\" (UID: \"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1\") " pod="calico-system/calico-apiserver-6c7f959487-nm874" Mar 13 12:24:03.344280 kubelet[3225]: I0313 12:24:03.343745 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1-calico-apiserver-certs\") pod \"calico-apiserver-6c7f959487-nm874\" (UID: \"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1\") " pod="calico-system/calico-apiserver-6c7f959487-nm874" Mar 13 12:24:03.344280 kubelet[3225]: I0313 12:24:03.343762 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/0a346f55-3551-472a-8019-424b407e361b-calico-apiserver-certs\") pod \"calico-apiserver-6c7f959487-mtfnw\" (UID: \"0a346f55-3551-472a-8019-424b407e361b\") " pod="calico-system/calico-apiserver-6c7f959487-mtfnw" Mar 13 12:24:03.344280 kubelet[3225]: I0313 12:24:03.343782 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42dbc5a5-80ff-4c29-b964-19cfe9b7fccd-config\") pod \"goldmane-cccfbd5cf-b86jq\" (UID: \"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd\") " pod="calico-system/goldmane-cccfbd5cf-b86jq" Mar 13 12:24:03.344280 kubelet[3225]: I0313 12:24:03.343798 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/42dbc5a5-80ff-4c29-b964-19cfe9b7fccd-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-b86jq\" (UID: \"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd\") " pod="calico-system/goldmane-cccfbd5cf-b86jq" Mar 13 12:24:03.344580 kubelet[3225]: I0313 12:24:03.343846 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dbc5a5-80ff-4c29-b964-19cfe9b7fccd-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-b86jq\" (UID: \"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd\") " pod="calico-system/goldmane-cccfbd5cf-b86jq" Mar 13 12:24:03.344580 kubelet[3225]: I0313 12:24:03.343863 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmgdm\" (UniqueName: \"kubernetes.io/projected/42dbc5a5-80ff-4c29-b964-19cfe9b7fccd-kube-api-access-mmgdm\") pod \"goldmane-cccfbd5cf-b86jq\" (UID: \"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd\") " pod="calico-system/goldmane-cccfbd5cf-b86jq" Mar 13 12:24:03.344580 kubelet[3225]: I0313 12:24:03.343880 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hs49k\" (UniqueName: \"kubernetes.io/projected/0a346f55-3551-472a-8019-424b407e361b-kube-api-access-hs49k\") pod \"calico-apiserver-6c7f959487-mtfnw\" (UID: \"0a346f55-3551-472a-8019-424b407e361b\") " pod="calico-system/calico-apiserver-6c7f959487-mtfnw" Mar 13 12:24:03.393619 containerd[1735]: time="2026-03-13T12:24:03.393577211Z" level=error msg="Failed to destroy network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.394035 containerd[1735]: time="2026-03-13T12:24:03.394010571Z" level=error msg="encountered an error cleaning up failed sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.394151 containerd[1735]: time="2026-03-13T12:24:03.394129692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q6rzg,Uid:1e7c6c09-c767-4bee-9251-d729af24c7dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.394516 kubelet[3225]: E0313 12:24:03.394390 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.394516 kubelet[3225]: E0313 12:24:03.394461 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:24:03.394516 kubelet[3225]: E0313 12:24:03.394481 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q6rzg" Mar 13 12:24:03.394721 kubelet[3225]: E0313 12:24:03.394682 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q6rzg_calico-system(1e7c6c09-c767-4bee-9251-d729af24c7dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q6rzg_calico-system(1e7c6c09-c767-4bee-9251-d729af24c7dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:24:03.525345 containerd[1735]: time="2026-03-13T12:24:03.524929199Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-cbn7n,Uid:e6e79ed8-8793-4170-b6be-57dc4f48b338,Namespace:kube-system,Attempt:0,}" Mar 13 12:24:03.534598 containerd[1735]: time="2026-03-13T12:24:03.534564135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7587867c-vjb5k,Uid:7bda6270-360d-45ae-b4e4-6b5189003539,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:03.555683 containerd[1735]: time="2026-03-13T12:24:03.555650452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xclsd,Uid:a8131164-7f35-4890-a8dd-a27d2707cba1,Namespace:kube-system,Attempt:0,}" Mar 13 12:24:03.576893 containerd[1735]: time="2026-03-13T12:24:03.576856929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c54c94bc-57wqz,Uid:f9d96381-f483-497e-aa8b-0362e6cdf815,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:03.603566 containerd[1735]: time="2026-03-13T12:24:03.603127214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-mtfnw,Uid:0a346f55-3551-472a-8019-424b407e361b,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:03.603566 containerd[1735]: time="2026-03-13T12:24:03.603496695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-nm874,Uid:f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:03.604066 containerd[1735]: time="2026-03-13T12:24:03.604029256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-b86jq,Uid:42dbc5a5-80ff-4c29-b964-19cfe9b7fccd,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:03.652248 containerd[1735]: time="2026-03-13T12:24:03.652165940Z" level=error msg="Failed to destroy network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 
12:24:03.652739 containerd[1735]: time="2026-03-13T12:24:03.652711780Z" level=error msg="encountered an error cleaning up failed sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.652876 containerd[1735]: time="2026-03-13T12:24:03.652854821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cbn7n,Uid:e6e79ed8-8793-4170-b6be-57dc4f48b338,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.653202 kubelet[3225]: E0313 12:24:03.653151 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.653268 kubelet[3225]: E0313 12:24:03.653223 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cbn7n" Mar 13 12:24:03.653268 kubelet[3225]: E0313 12:24:03.653242 3225 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cbn7n" Mar 13 12:24:03.653345 kubelet[3225]: E0313 12:24:03.653314 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cbn7n_kube-system(e6e79ed8-8793-4170-b6be-57dc4f48b338)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cbn7n_kube-system(e6e79ed8-8793-4170-b6be-57dc4f48b338)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cbn7n" podUID="e6e79ed8-8793-4170-b6be-57dc4f48b338" Mar 13 12:24:03.674932 containerd[1735]: time="2026-03-13T12:24:03.674461138Z" level=error msg="Failed to destroy network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.677576 containerd[1735]: time="2026-03-13T12:24:03.677543104Z" level=error msg="encountered an error cleaning up failed sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.677715 containerd[1735]: time="2026-03-13T12:24:03.677693904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7587867c-vjb5k,Uid:7bda6270-360d-45ae-b4e4-6b5189003539,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.678556 kubelet[3225]: E0313 12:24:03.678514 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.678666 kubelet[3225]: E0313 12:24:03.678573 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7587867c-vjb5k" Mar 13 12:24:03.678666 kubelet[3225]: E0313 12:24:03.678594 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7587867c-vjb5k" Mar 13 12:24:03.678666 kubelet[3225]: E0313 12:24:03.678639 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7587867c-vjb5k_calico-system(7bda6270-360d-45ae-b4e4-6b5189003539)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7587867c-vjb5k_calico-system(7bda6270-360d-45ae-b4e4-6b5189003539)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7587867c-vjb5k" podUID="7bda6270-360d-45ae-b4e4-6b5189003539" Mar 13 12:24:03.728729 containerd[1735]: time="2026-03-13T12:24:03.728676592Z" level=error msg="Failed to destroy network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.729417 containerd[1735]: time="2026-03-13T12:24:03.729379794Z" level=error msg="encountered an error cleaning up failed sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.729636 containerd[1735]: time="2026-03-13T12:24:03.729472714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xclsd,Uid:a8131164-7f35-4890-a8dd-a27d2707cba1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.731898 kubelet[3225]: E0313 12:24:03.730460 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.731898 kubelet[3225]: E0313 12:24:03.730506 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xclsd" Mar 13 12:24:03.731898 kubelet[3225]: E0313 12:24:03.730527 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xclsd" Mar 13 12:24:03.732089 kubelet[3225]: E0313 12:24:03.730575 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xclsd_kube-system(a8131164-7f35-4890-a8dd-a27d2707cba1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-xclsd_kube-system(a8131164-7f35-4890-a8dd-a27d2707cba1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xclsd" podUID="a8131164-7f35-4890-a8dd-a27d2707cba1" Mar 13 12:24:03.806241 containerd[1735]: time="2026-03-13T12:24:03.806053607Z" level=error msg="Failed to destroy network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.807245 containerd[1735]: time="2026-03-13T12:24:03.806863648Z" level=error msg="encountered an error cleaning up failed sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.807245 containerd[1735]: time="2026-03-13T12:24:03.806920968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c54c94bc-57wqz,Uid:f9d96381-f483-497e-aa8b-0362e6cdf815,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.809518 kubelet[3225]: E0313 12:24:03.808974 3225 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.809518 kubelet[3225]: E0313 12:24:03.809042 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" Mar 13 12:24:03.817574 kubelet[3225]: E0313 12:24:03.816481 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" Mar 13 12:24:03.817574 kubelet[3225]: E0313 12:24:03.816585 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64c54c94bc-57wqz_calico-system(f9d96381-f483-497e-aa8b-0362e6cdf815)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64c54c94bc-57wqz_calico-system(f9d96381-f483-497e-aa8b-0362e6cdf815)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" podUID="f9d96381-f483-497e-aa8b-0362e6cdf815" Mar 13 12:24:03.829488 containerd[1735]: time="2026-03-13T12:24:03.829427647Z" level=error msg="Failed to destroy network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.830121 containerd[1735]: time="2026-03-13T12:24:03.830092168Z" level=error msg="encountered an error cleaning up failed sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.830609 containerd[1735]: time="2026-03-13T12:24:03.830353249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-b86jq,Uid:42dbc5a5-80ff-4c29-b964-19cfe9b7fccd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.831554 kubelet[3225]: E0313 12:24:03.830961 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Mar 13 12:24:03.831554 kubelet[3225]: E0313 12:24:03.831010 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-b86jq" Mar 13 12:24:03.831554 kubelet[3225]: E0313 12:24:03.831038 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-b86jq" Mar 13 12:24:03.831696 kubelet[3225]: E0313 12:24:03.831084 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-b86jq_calico-system(42dbc5a5-80ff-4c29-b964-19cfe9b7fccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-b86jq_calico-system(42dbc5a5-80ff-4c29-b964-19cfe9b7fccd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-b86jq" podUID="42dbc5a5-80ff-4c29-b964-19cfe9b7fccd" Mar 13 12:24:03.833842 containerd[1735]: time="2026-03-13T12:24:03.833803615Z" level=error msg="Failed to destroy network for sandbox 
\"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.834989 containerd[1735]: time="2026-03-13T12:24:03.834394336Z" level=error msg="encountered an error cleaning up failed sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.834989 containerd[1735]: time="2026-03-13T12:24:03.834530696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-nm874,Uid:f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.835113 kubelet[3225]: E0313 12:24:03.834785 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.835113 kubelet[3225]: E0313 12:24:03.834825 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f959487-nm874" Mar 13 12:24:03.835113 kubelet[3225]: E0313 12:24:03.834841 3225 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f959487-nm874" Mar 13 12:24:03.835197 kubelet[3225]: E0313 12:24:03.834879 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7f959487-nm874_calico-system(f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7f959487-nm874_calico-system(f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c7f959487-nm874" podUID="f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1" Mar 13 12:24:03.835868 containerd[1735]: time="2026-03-13T12:24:03.835507778Z" level=error msg="Failed to destroy network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.836666 containerd[1735]: 
time="2026-03-13T12:24:03.836619820Z" level=error msg="encountered an error cleaning up failed sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.836751 containerd[1735]: time="2026-03-13T12:24:03.836670180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-mtfnw,Uid:0a346f55-3551-472a-8019-424b407e361b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.837229 kubelet[3225]: E0313 12:24:03.836795 3225 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.837229 kubelet[3225]: E0313 12:24:03.836831 3225 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f959487-mtfnw" Mar 13 12:24:03.837229 kubelet[3225]: E0313 12:24:03.836845 3225 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6c7f959487-mtfnw" Mar 13 12:24:03.837351 kubelet[3225]: E0313 12:24:03.836894 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7f959487-mtfnw_calico-system(0a346f55-3551-472a-8019-424b407e361b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7f959487-mtfnw_calico-system(0a346f55-3551-472a-8019-424b407e361b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c7f959487-mtfnw" podUID="0a346f55-3551-472a-8019-424b407e361b" Mar 13 12:24:03.852330 containerd[1735]: time="2026-03-13T12:24:03.851469286Z" level=info msg="StopPodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\"" Mar 13 12:24:03.852330 containerd[1735]: time="2026-03-13T12:24:03.851634726Z" level=info msg="Ensure that sandbox 085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4 in task-service has been cleanup successfully" Mar 13 12:24:03.852468 kubelet[3225]: I0313 12:24:03.852179 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:03.853621 kubelet[3225]: I0313 12:24:03.853603 3225 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:24:03.854488 containerd[1735]: time="2026-03-13T12:24:03.854463051Z" level=info msg="StopPodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\"" Mar 13 12:24:03.855171 containerd[1735]: time="2026-03-13T12:24:03.855146732Z" level=info msg="Ensure that sandbox 4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83 in task-service has been cleanup successfully" Mar 13 12:24:03.855718 kubelet[3225]: I0313 12:24:03.854958 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:24:03.856600 containerd[1735]: time="2026-03-13T12:24:03.856578294Z" level=info msg="StopPodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\"" Mar 13 12:24:03.856823 containerd[1735]: time="2026-03-13T12:24:03.856803695Z" level=info msg="Ensure that sandbox 0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371 in task-service has been cleanup successfully" Mar 13 12:24:03.860014 kubelet[3225]: I0313 12:24:03.859425 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:24:03.862154 containerd[1735]: time="2026-03-13T12:24:03.862129424Z" level=info msg="StopPodSandbox for \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\"" Mar 13 12:24:03.862364 containerd[1735]: time="2026-03-13T12:24:03.862346144Z" level=info msg="Ensure that sandbox 800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e in task-service has been cleanup successfully" Mar 13 12:24:03.867163 kubelet[3225]: I0313 12:24:03.866866 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:24:03.867299 
containerd[1735]: time="2026-03-13T12:24:03.867273153Z" level=info msg="StopPodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\"" Mar 13 12:24:03.867428 containerd[1735]: time="2026-03-13T12:24:03.867405473Z" level=info msg="Ensure that sandbox 9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a in task-service has been cleanup successfully" Mar 13 12:24:03.886023 kubelet[3225]: I0313 12:24:03.885164 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:24:03.887788 containerd[1735]: time="2026-03-13T12:24:03.887753109Z" level=info msg="StopPodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\"" Mar 13 12:24:03.889379 containerd[1735]: time="2026-03-13T12:24:03.889017951Z" level=info msg="Ensure that sandbox f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b in task-service has been cleanup successfully" Mar 13 12:24:03.895103 kubelet[3225]: I0313 12:24:03.895082 3225 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:24:03.898024 containerd[1735]: time="2026-03-13T12:24:03.897989846Z" level=info msg="StopPodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\"" Mar 13 12:24:03.898183 containerd[1735]: time="2026-03-13T12:24:03.898150447Z" level=info msg="Ensure that sandbox 37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178 in task-service has been cleanup successfully" Mar 13 12:24:03.900300 containerd[1735]: time="2026-03-13T12:24:03.900269210Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 12:24:03.901026 kubelet[3225]: I0313 12:24:03.900981 3225 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:24:03.902607 containerd[1735]: time="2026-03-13T12:24:03.902387134Z" level=info msg="StopPodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\"" Mar 13 12:24:03.903193 containerd[1735]: time="2026-03-13T12:24:03.902980815Z" level=info msg="Ensure that sandbox 5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12 in task-service has been cleanup successfully" Mar 13 12:24:03.940580 containerd[1735]: time="2026-03-13T12:24:03.940535080Z" level=error msg="StopPodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" failed" error="failed to destroy network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.941300 kubelet[3225]: E0313 12:24:03.941121 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:24:03.941300 kubelet[3225]: E0313 12:24:03.941176 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a"} Mar 13 12:24:03.941300 kubelet[3225]: E0313 12:24:03.941234 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e7c6c09-c767-4bee-9251-d729af24c7dc\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:03.941300 kubelet[3225]: E0313 12:24:03.941258 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e7c6c09-c767-4bee-9251-d729af24c7dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q6rzg" podUID="1e7c6c09-c767-4bee-9251-d729af24c7dc" Mar 13 12:24:03.975087 containerd[1735]: time="2026-03-13T12:24:03.975037460Z" level=info msg="CreateContainer within sandbox \"8dab31531f715bd8dba7c8905664706f4c2bb754794e891ddbab90db0350226f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"53946500c91142fa820b4e56e3431bcfbfe1a57d1a71bc4732a31cdf4d34e3ce\"" Mar 13 12:24:03.975876 containerd[1735]: time="2026-03-13T12:24:03.975746581Z" level=info msg="StartContainer for \"53946500c91142fa820b4e56e3431bcfbfe1a57d1a71bc4732a31cdf4d34e3ce\"" Mar 13 12:24:03.982593 containerd[1735]: time="2026-03-13T12:24:03.982559553Z" level=error msg="StopPodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" failed" error="failed to destroy network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 13 12:24:03.982866 kubelet[3225]: E0313 12:24:03.982833 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:24:03.983196 kubelet[3225]: E0313 12:24:03.983081 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371"} Mar 13 12:24:03.983196 kubelet[3225]: E0313 12:24:03.983143 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:03.983196 kubelet[3225]: E0313 12:24:03.983170 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-b86jq" podUID="42dbc5a5-80ff-4c29-b964-19cfe9b7fccd" Mar 13 
12:24:03.983712 containerd[1735]: time="2026-03-13T12:24:03.983604315Z" level=error msg="StopPodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" failed" error="failed to destroy network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:03.983880 kubelet[3225]: E0313 12:24:03.983759 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:03.983880 kubelet[3225]: E0313 12:24:03.983786 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4"} Mar 13 12:24:03.983880 kubelet[3225]: E0313 12:24:03.983808 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bda6270-360d-45ae-b4e4-6b5189003539\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:03.983880 kubelet[3225]: E0313 12:24:03.983826 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bda6270-360d-45ae-b4e4-6b5189003539\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7587867c-vjb5k" podUID="7bda6270-360d-45ae-b4e4-6b5189003539" Mar 13 12:24:03.999937 containerd[1735]: time="2026-03-13T12:24:03.999672423Z" level=error msg="StopPodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" failed" error="failed to destroy network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:04.000592 kubelet[3225]: E0313 12:24:03.999902 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:24:04.000592 kubelet[3225]: E0313 12:24:04.000478 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b"} Mar 13 12:24:04.000592 kubelet[3225]: E0313 12:24:04.000530 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:04.000592 kubelet[3225]: E0313 12:24:04.000561 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c7f959487-nm874" podUID="f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1" Mar 13 12:24:04.004583 containerd[1735]: time="2026-03-13T12:24:04.004548791Z" level=error msg="StopPodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" failed" error="failed to destroy network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:04.004824 kubelet[3225]: E0313 12:24:04.004706 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:24:04.004824 kubelet[3225]: E0313 12:24:04.004743 3225 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12"} Mar 13 12:24:04.004824 kubelet[3225]: E0313 12:24:04.004765 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9d96381-f483-497e-aa8b-0362e6cdf815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:04.004824 kubelet[3225]: E0313 12:24:04.004786 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9d96381-f483-497e-aa8b-0362e6cdf815\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" podUID="f9d96381-f483-497e-aa8b-0362e6cdf815" Mar 13 12:24:04.005426 containerd[1735]: time="2026-03-13T12:24:04.005353473Z" level=error msg="StopPodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" failed" error="failed to destroy network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:04.005882 kubelet[3225]: E0313 12:24:04.005855 3225 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:24:04.005959 kubelet[3225]: E0313 12:24:04.005885 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83"} Mar 13 12:24:04.005959 kubelet[3225]: E0313 12:24:04.005910 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6e79ed8-8793-4170-b6be-57dc4f48b338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:04.005959 kubelet[3225]: E0313 12:24:04.005929 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6e79ed8-8793-4170-b6be-57dc4f48b338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cbn7n" podUID="e6e79ed8-8793-4170-b6be-57dc4f48b338" Mar 13 12:24:04.009561 containerd[1735]: time="2026-03-13T12:24:04.009251720Z" level=error msg="StopPodSandbox for 
\"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" failed" error="failed to destroy network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:04.009820 kubelet[3225]: E0313 12:24:04.009428 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:24:04.009820 kubelet[3225]: E0313 12:24:04.009758 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e"} Mar 13 12:24:04.010107 kubelet[3225]: E0313 12:24:04.009921 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a346f55-3551-472a-8019-424b407e361b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:04.010107 kubelet[3225]: E0313 12:24:04.009951 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a346f55-3551-472a-8019-424b407e361b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6c7f959487-mtfnw" podUID="0a346f55-3551-472a-8019-424b407e361b" Mar 13 12:24:04.014451 containerd[1735]: time="2026-03-13T12:24:04.014282088Z" level=error msg="StopPodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" failed" error="failed to destroy network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:24:04.014538 kubelet[3225]: E0313 12:24:04.014471 3225 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:24:04.014538 kubelet[3225]: E0313 12:24:04.014503 3225 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178"} Mar 13 12:24:04.014538 kubelet[3225]: E0313 12:24:04.014526 3225 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8131164-7f35-4890-a8dd-a27d2707cba1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:24:04.014640 kubelet[3225]: E0313 12:24:04.014545 3225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8131164-7f35-4890-a8dd-a27d2707cba1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xclsd" podUID="a8131164-7f35-4890-a8dd-a27d2707cba1" Mar 13 12:24:04.027599 systemd[1]: Started cri-containerd-53946500c91142fa820b4e56e3431bcfbfe1a57d1a71bc4732a31cdf4d34e3ce.scope - libcontainer container 53946500c91142fa820b4e56e3431bcfbfe1a57d1a71bc4732a31cdf4d34e3ce. Mar 13 12:24:04.056858 containerd[1735]: time="2026-03-13T12:24:04.056723282Z" level=info msg="StartContainer for \"53946500c91142fa820b4e56e3431bcfbfe1a57d1a71bc4732a31cdf4d34e3ce\" returns successfully" Mar 13 12:24:04.317401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a-shm.mount: Deactivated successfully. 
Mar 13 12:24:04.906087 containerd[1735]: time="2026-03-13T12:24:04.905941117Z" level=info msg="StopPodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\"" Mar 13 12:24:04.943029 kubelet[3225]: I0313 12:24:04.942915 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7z7bk" podStartSLOduration=16.722290028 podStartE2EDuration="32.942893581s" podCreationTimestamp="2026-03-13 12:23:32 +0000 UTC" firstStartedPulling="2026-03-13 12:23:33.270088732 +0000 UTC m=+20.933668644" lastFinishedPulling="2026-03-13 12:23:49.490692285 +0000 UTC m=+37.154272197" observedRunningTime="2026-03-13 12:24:04.942894381 +0000 UTC m=+52.606474293" watchObservedRunningTime="2026-03-13 12:24:04.942893581 +0000 UTC m=+52.606473453" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.971 [INFO][4444] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.971 [INFO][4444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" iface="eth0" netns="/var/run/netns/cni-5d1ab1a2-5bde-b62e-7720-238b9163e4a2" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.973 [INFO][4444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" iface="eth0" netns="/var/run/netns/cni-5d1ab1a2-5bde-b62e-7720-238b9163e4a2" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.973 [INFO][4444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" iface="eth0" netns="/var/run/netns/cni-5d1ab1a2-5bde-b62e-7720-238b9163e4a2" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.973 [INFO][4444] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.974 [INFO][4444] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.992 [INFO][4452] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.992 [INFO][4452] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:04.992 [INFO][4452] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:05.005 [WARNING][4452] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:05.005 [INFO][4452] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:05.006 [INFO][4452] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:05.010796 containerd[1735]: 2026-03-13 12:24:05.009 [INFO][4444] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:05.013309 containerd[1735]: time="2026-03-13T12:24:05.012543542Z" level=info msg="TearDown network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" successfully" Mar 13 12:24:05.013309 containerd[1735]: time="2026-03-13T12:24:05.012573662Z" level=info msg="StopPodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" returns successfully" Mar 13 12:24:05.014072 systemd[1]: run-netns-cni\x2d5d1ab1a2\x2d5bde\x2db62e\x2d7720\x2d238b9163e4a2.mount: Deactivated successfully. 
Mar 13 12:24:05.054129 kubelet[3225]: I0313 12:24:05.054086 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhmcj\" (UniqueName: \"kubernetes.io/projected/7bda6270-360d-45ae-b4e4-6b5189003539-kube-api-access-rhmcj\") pod \"7bda6270-360d-45ae-b4e4-6b5189003539\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " Mar 13 12:24:05.054129 kubelet[3225]: I0313 12:24:05.054134 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-ca-bundle\") pod \"7bda6270-360d-45ae-b4e4-6b5189003539\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " Mar 13 12:24:05.054305 kubelet[3225]: I0313 12:24:05.054168 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-backend-key-pair\") pod \"7bda6270-360d-45ae-b4e4-6b5189003539\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " Mar 13 12:24:05.054305 kubelet[3225]: I0313 12:24:05.054186 3225 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-nginx-config\") pod \"7bda6270-360d-45ae-b4e4-6b5189003539\" (UID: \"7bda6270-360d-45ae-b4e4-6b5189003539\") " Mar 13 12:24:05.054590 kubelet[3225]: I0313 12:24:05.054567 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "7bda6270-360d-45ae-b4e4-6b5189003539" (UID: "7bda6270-360d-45ae-b4e4-6b5189003539"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 12:24:05.054879 kubelet[3225]: I0313 12:24:05.054813 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7bda6270-360d-45ae-b4e4-6b5189003539" (UID: "7bda6270-360d-45ae-b4e4-6b5189003539"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 12:24:05.060038 kubelet[3225]: I0313 12:24:05.059998 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7bda6270-360d-45ae-b4e4-6b5189003539" (UID: "7bda6270-360d-45ae-b4e4-6b5189003539"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 12:24:05.060164 kubelet[3225]: I0313 12:24:05.060096 3225 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bda6270-360d-45ae-b4e4-6b5189003539-kube-api-access-rhmcj" (OuterVolumeSpecName: "kube-api-access-rhmcj") pod "7bda6270-360d-45ae-b4e4-6b5189003539" (UID: "7bda6270-360d-45ae-b4e4-6b5189003539"). InnerVolumeSpecName "kube-api-access-rhmcj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 12:24:05.060701 systemd[1]: var-lib-kubelet-pods-7bda6270\x2d360d\x2d45ae\x2db4e4\x2d6b5189003539-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhmcj.mount: Deactivated successfully. Mar 13 12:24:05.060908 systemd[1]: var-lib-kubelet-pods-7bda6270\x2d360d\x2d45ae\x2db4e4\x2d6b5189003539-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 13 12:24:05.155377 kubelet[3225]: I0313 12:24:05.155339 3225 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rhmcj\" (UniqueName: \"kubernetes.io/projected/7bda6270-360d-45ae-b4e4-6b5189003539-kube-api-access-rhmcj\") on node \"ci-4081.3.101-461ebd96c0\" DevicePath \"\"" Mar 13 12:24:05.155377 kubelet[3225]: I0313 12:24:05.155372 3225 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-ca-bundle\") on node \"ci-4081.3.101-461ebd96c0\" DevicePath \"\"" Mar 13 12:24:05.155377 kubelet[3225]: I0313 12:24:05.155382 3225 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bda6270-360d-45ae-b4e4-6b5189003539-whisker-backend-key-pair\") on node \"ci-4081.3.101-461ebd96c0\" DevicePath \"\"" Mar 13 12:24:05.155583 kubelet[3225]: I0313 12:24:05.155391 3225 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7bda6270-360d-45ae-b4e4-6b5189003539-nginx-config\") on node \"ci-4081.3.101-461ebd96c0\" DevicePath \"\"" Mar 13 12:24:05.699521 kernel: calico-node[4489]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 13 12:24:05.914600 systemd[1]: Removed slice kubepods-besteffort-pod7bda6270_360d_45ae_b4e4_6b5189003539.slice - libcontainer container kubepods-besteffort-pod7bda6270_360d_45ae_b4e4_6b5189003539.slice. Mar 13 12:24:05.988144 systemd[1]: Created slice kubepods-besteffort-podb7069cfc_6b12_4463_8b3d_cafbd1f0e9cb.slice - libcontainer container kubepods-besteffort-podb7069cfc_6b12_4463_8b3d_cafbd1f0e9cb.slice. 
Mar 13 12:24:06.059534 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.059647 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.061941 kubelet[3225]: I0313 12:24:06.061767 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb-nginx-config\") pod \"whisker-dc875f6c4-ff5m9\" (UID: \"b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb\") " pod="calico-system/whisker-dc875f6c4-ff5m9" Mar 13 12:24:06.061941 kubelet[3225]: I0313 12:24:06.061813 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bfxr\" (UniqueName: \"kubernetes.io/projected/b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb-kube-api-access-9bfxr\") pod \"whisker-dc875f6c4-ff5m9\" (UID: \"b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb\") " pod="calico-system/whisker-dc875f6c4-ff5m9" Mar 13 12:24:06.061941 kubelet[3225]: I0313 12:24:06.061837 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb-whisker-backend-key-pair\") pod \"whisker-dc875f6c4-ff5m9\" (UID: \"b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb\") " pod="calico-system/whisker-dc875f6c4-ff5m9" Mar 13 12:24:06.061941 kubelet[3225]: I0313 12:24:06.061858 3225 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb-whisker-ca-bundle\") pod \"whisker-dc875f6c4-ff5m9\" (UID: \"b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb\") " pod="calico-system/whisker-dc875f6c4-ff5m9" Mar 13 12:24:06.075064 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.075421 kernel: icmp: 
detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.075486 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.086450 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.086554 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.099454 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.112923 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.113020 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.96.0.10 Mar 13 12:24:06.244313 systemd-networkd[1354]: vxlan.calico: Link UP Mar 13 12:24:06.244320 systemd-networkd[1354]: vxlan.calico: Gained carrier Mar 13 12:24:06.300697 containerd[1735]: time="2026-03-13T12:24:06.300480178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc875f6c4-ff5m9,Uid:b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb,Namespace:calico-system,Attempt:0,}" Mar 13 12:24:06.439178 kubelet[3225]: I0313 12:24:06.439124 3225 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bda6270-360d-45ae-b4e4-6b5189003539" path="/var/lib/kubelet/pods/7bda6270-360d-45ae-b4e4-6b5189003539/volumes" Mar 13 12:24:06.441651 systemd-networkd[1354]: cali177c54b77cc: Link UP Mar 13 12:24:06.443465 systemd-networkd[1354]: cali177c54b77cc: Gained carrier Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.369 [INFO][4637] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0 whisker-dc875f6c4- calico-system b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb 925 0 2026-03-13 12:24:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dc875f6c4 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 whisker-dc875f6c4-ff5m9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali177c54b77cc [] [] }} ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.370 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.394 [INFO][4646] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" HandleID="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.403 [INFO][4646] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" HandleID="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273150), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"whisker-dc875f6c4-ff5m9", "timestamp":"2026-03-13 12:24:06.394543181 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400034d080)} Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.403 [INFO][4646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.403 [INFO][4646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.404 [INFO][4646] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.406 [INFO][4646] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.411 [INFO][4646] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.415 [INFO][4646] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.416 [INFO][4646] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.418 [INFO][4646] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.418 [INFO][4646] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.420 [INFO][4646] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4 Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.424 [INFO][4646] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.429 [INFO][4646] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.1/26] block=192.168.100.0/26 handle="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.429 [INFO][4646] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.1/26] handle="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.429 [INFO][4646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 12:24:06.465089 containerd[1735]: 2026-03-13 12:24:06.429 [INFO][4646] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.1/26] IPv6=[] ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" HandleID="k8s-pod-network.0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.465930 containerd[1735]: 2026-03-13 12:24:06.432 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0", GenerateName:"whisker-dc875f6c4-", Namespace:"calico-system", SelfLink:"", UID:"b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dc875f6c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"whisker-dc875f6c4-ff5m9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali177c54b77cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:06.465930 containerd[1735]: 2026-03-13 12:24:06.433 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.1/32] ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.465930 containerd[1735]: 2026-03-13 12:24:06.433 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali177c54b77cc ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.465930 containerd[1735]: 2026-03-13 12:24:06.443 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.465930 containerd[1735]: 2026-03-13 12:24:06.444 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0", GenerateName:"whisker-dc875f6c4-", Namespace:"calico-system", SelfLink:"", UID:"b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb", 
ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 24, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dc875f6c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4", Pod:"whisker-dc875f6c4-ff5m9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali177c54b77cc", MAC:"aa:0f:26:10:ad:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:06.465930 containerd[1735]: 2026-03-13 12:24:06.462 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4" Namespace="calico-system" Pod="whisker-dc875f6c4-ff5m9" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--dc875f6c4--ff5m9-eth0" Mar 13 12:24:06.493395 containerd[1735]: time="2026-03-13T12:24:06.493245113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:06.493395 containerd[1735]: time="2026-03-13T12:24:06.493318873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:06.493395 containerd[1735]: time="2026-03-13T12:24:06.493332433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:06.494289 containerd[1735]: time="2026-03-13T12:24:06.494242314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:06.522640 systemd[1]: Started cri-containerd-0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4.scope - libcontainer container 0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4. Mar 13 12:24:06.574699 containerd[1735]: time="2026-03-13T12:24:06.574656094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc875f6c4-ff5m9,Uid:b7069cfc-6b12-4463-8b3d-cafbd1f0e9cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4\"" Mar 13 12:24:06.574699 containerd[1735]: time="2026-03-13T12:24:06.577020778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 12:24:07.809577 systemd-networkd[1354]: vxlan.calico: Gained IPv6LL Mar 13 12:24:07.933458 containerd[1735]: time="2026-03-13T12:24:07.933329326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:07.935895 containerd[1735]: time="2026-03-13T12:24:07.935746250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 13 12:24:07.939161 containerd[1735]: time="2026-03-13T12:24:07.938540295Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:07.944172 containerd[1735]: time="2026-03-13T12:24:07.943970064Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:07.944914 containerd[1735]: time="2026-03-13T12:24:07.944666345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.367609087s" Mar 13 12:24:07.944914 containerd[1735]: time="2026-03-13T12:24:07.944701225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 13 12:24:07.952758 containerd[1735]: time="2026-03-13T12:24:07.952598799Z" level=info msg="CreateContainer within sandbox \"0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 12:24:07.984822 containerd[1735]: time="2026-03-13T12:24:07.984769733Z" level=info msg="CreateContainer within sandbox \"0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"11c76b76064d62c8bbaed678a662879de6547f06a9d8289ad2cc97186e7c3911\"" Mar 13 12:24:07.986975 containerd[1735]: time="2026-03-13T12:24:07.985879694Z" level=info msg="StartContainer for \"11c76b76064d62c8bbaed678a662879de6547f06a9d8289ad2cc97186e7c3911\"" Mar 13 12:24:08.018587 systemd[1]: Started cri-containerd-11c76b76064d62c8bbaed678a662879de6547f06a9d8289ad2cc97186e7c3911.scope - libcontainer container 11c76b76064d62c8bbaed678a662879de6547f06a9d8289ad2cc97186e7c3911. 
Mar 13 12:24:08.050646 containerd[1735]: time="2026-03-13T12:24:08.050599283Z" level=info msg="StartContainer for \"11c76b76064d62c8bbaed678a662879de6547f06a9d8289ad2cc97186e7c3911\" returns successfully" Mar 13 12:24:08.053943 containerd[1735]: time="2026-03-13T12:24:08.053748289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 12:24:08.257627 systemd-networkd[1354]: cali177c54b77cc: Gained IPv6LL Mar 13 12:24:09.492233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415259024.mount: Deactivated successfully. Mar 13 12:24:09.545166 containerd[1735]: time="2026-03-13T12:24:09.545118514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:09.547924 containerd[1735]: time="2026-03-13T12:24:09.547788638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 13 12:24:09.555215 containerd[1735]: time="2026-03-13T12:24:09.554828290Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:09.559368 containerd[1735]: time="2026-03-13T12:24:09.559337018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:09.560228 containerd[1735]: time="2026-03-13T12:24:09.560199339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size 
\"16426424\" in 1.50641657s" Mar 13 12:24:09.560389 containerd[1735]: time="2026-03-13T12:24:09.560370460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 13 12:24:09.567254 containerd[1735]: time="2026-03-13T12:24:09.567221351Z" level=info msg="CreateContainer within sandbox \"0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 12:24:09.609758 containerd[1735]: time="2026-03-13T12:24:09.609712262Z" level=info msg="CreateContainer within sandbox \"0313c632476a77c2921dae99395bd216782a55df3537462509dc8c94b70aa7a4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0eec70be0f76d3c05703ce477ae906b76771dee45a9de7bce7b750dbb3221e20\"" Mar 13 12:24:09.612022 containerd[1735]: time="2026-03-13T12:24:09.610598104Z" level=info msg="StartContainer for \"0eec70be0f76d3c05703ce477ae906b76771dee45a9de7bce7b750dbb3221e20\"" Mar 13 12:24:09.640700 systemd[1]: Started cri-containerd-0eec70be0f76d3c05703ce477ae906b76771dee45a9de7bce7b750dbb3221e20.scope - libcontainer container 0eec70be0f76d3c05703ce477ae906b76771dee45a9de7bce7b750dbb3221e20. 
Mar 13 12:24:09.710032 containerd[1735]: time="2026-03-13T12:24:09.709985671Z" level=info msg="StartContainer for \"0eec70be0f76d3c05703ce477ae906b76771dee45a9de7bce7b750dbb3221e20\" returns successfully" Mar 13 12:24:09.932022 kubelet[3225]: I0313 12:24:09.931955 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-dc875f6c4-ff5m9" podStartSLOduration=1.946427319 podStartE2EDuration="4.931938204s" podCreationTimestamp="2026-03-13 12:24:05 +0000 UTC" firstStartedPulling="2026-03-13 12:24:06.575940816 +0000 UTC m=+54.239520728" lastFinishedPulling="2026-03-13 12:24:09.561451741 +0000 UTC m=+57.225031613" observedRunningTime="2026-03-13 12:24:09.931572043 +0000 UTC m=+57.595151915" watchObservedRunningTime="2026-03-13 12:24:09.931938204 +0000 UTC m=+57.595518076" Mar 13 12:24:11.854758 kernel: net_ratelimit: 4 callbacks suppressed Mar 13 12:24:11.854881 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.101.115.65 Mar 13 12:24:12.416055 containerd[1735]: time="2026-03-13T12:24:12.416007377Z" level=info msg="StopPodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\"" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.452 [WARNING][4863] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.452 [INFO][4863] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.452 [INFO][4863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" iface="eth0" netns="" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.452 [INFO][4863] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.452 [INFO][4863] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.469 [INFO][4873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.470 [INFO][4873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.470 [INFO][4873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.478 [WARNING][4873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.478 [INFO][4873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.479 [INFO][4873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:12.482857 containerd[1735]: 2026-03-13 12:24:12.481 [INFO][4863] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.483280 containerd[1735]: time="2026-03-13T12:24:12.482902649Z" level=info msg="TearDown network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" successfully" Mar 13 12:24:12.483280 containerd[1735]: time="2026-03-13T12:24:12.482928729Z" level=info msg="StopPodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" returns successfully" Mar 13 12:24:12.483593 containerd[1735]: time="2026-03-13T12:24:12.483566330Z" level=info msg="RemovePodSandbox for \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\"" Mar 13 12:24:12.491738 containerd[1735]: time="2026-03-13T12:24:12.491701904Z" level=info msg="Forcibly stopping sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\"" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.533 [WARNING][4887] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.533 [INFO][4887] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.533 [INFO][4887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" iface="eth0" netns="" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.533 [INFO][4887] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.533 [INFO][4887] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.551 [INFO][4894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.552 [INFO][4894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.552 [INFO][4894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.560 [WARNING][4894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.560 [INFO][4894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" HandleID="k8s-pod-network.085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Workload="ci--4081.3.101--461ebd96c0-k8s-whisker--7587867c--vjb5k-eth0" Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.561 [INFO][4894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:12.564865 containerd[1735]: 2026-03-13 12:24:12.563 [INFO][4887] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4" Mar 13 12:24:12.565211 containerd[1735]: time="2026-03-13T12:24:12.564910547Z" level=info msg="TearDown network for sandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" successfully" Mar 13 12:24:12.575057 containerd[1735]: time="2026-03-13T12:24:12.575019044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:24:12.575148 containerd[1735]: time="2026-03-13T12:24:12.575086004Z" level=info msg="RemovePodSandbox \"085e985c5b136585db2550eba413ddee73a1fde156706019f804d96c120798e4\" returns successfully" Mar 13 12:24:14.435324 containerd[1735]: time="2026-03-13T12:24:14.435169129Z" level=info msg="StopPodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\"" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.481 [INFO][4909] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.481 [INFO][4909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" iface="eth0" netns="/var/run/netns/cni-61bf75ff-5a95-8bf8-cbae-0a8bb9f1c02a" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.481 [INFO][4909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" iface="eth0" netns="/var/run/netns/cni-61bf75ff-5a95-8bf8-cbae-0a8bb9f1c02a" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.483 [INFO][4909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" iface="eth0" netns="/var/run/netns/cni-61bf75ff-5a95-8bf8-cbae-0a8bb9f1c02a" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.483 [INFO][4909] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.483 [INFO][4909] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.502 [INFO][4917] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.503 [INFO][4917] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.503 [INFO][4917] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.511 [WARNING][4917] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.511 [INFO][4917] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.513 [INFO][4917] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:14.516708 containerd[1735]: 2026-03-13 12:24:14.515 [INFO][4909] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:24:14.518713 containerd[1735]: time="2026-03-13T12:24:14.518354189Z" level=info msg="TearDown network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" successfully" Mar 13 12:24:14.518713 containerd[1735]: time="2026-03-13T12:24:14.518386149Z" level=info msg="StopPodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" returns successfully" Mar 13 12:24:14.519778 systemd[1]: run-netns-cni\x2d61bf75ff\x2d5a95\x2d8bf8\x2dcbae\x2d0a8bb9f1c02a.mount: Deactivated successfully. 
Mar 13 12:24:14.525406 containerd[1735]: time="2026-03-13T12:24:14.525053600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c54c94bc-57wqz,Uid:f9d96381-f483-497e-aa8b-0362e6cdf815,Namespace:calico-system,Attempt:1,}" Mar 13 12:24:14.704600 systemd-networkd[1354]: cali39e38bce82f: Link UP Mar 13 12:24:14.705414 systemd-networkd[1354]: cali39e38bce82f: Gained carrier Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.592 [INFO][4923] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0 calico-kube-controllers-64c54c94bc- calico-system f9d96381-f483-497e-aa8b-0362e6cdf815 964 0 2026-03-13 12:23:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64c54c94bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 calico-kube-controllers-64c54c94bc-57wqz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali39e38bce82f [] [] }} ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.592 [INFO][4923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.630 [INFO][4935] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" HandleID="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.639 [INFO][4935] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" HandleID="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"calico-kube-controllers-64c54c94bc-57wqz", "timestamp":"2026-03-13 12:24:14.630023296 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030f080)} Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.639 [INFO][4935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.639 [INFO][4935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.639 [INFO][4935] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.641 [INFO][4935] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.645 [INFO][4935] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.648 [INFO][4935] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.650 [INFO][4935] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.652 [INFO][4935] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.652 [INFO][4935] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.653 [INFO][4935] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3 Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.669 [INFO][4935] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.698 [INFO][4935] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.100.2/26] block=192.168.100.0/26 handle="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.698 [INFO][4935] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.2/26] handle="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.698 [INFO][4935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:14.724310 containerd[1735]: 2026-03-13 12:24:14.698 [INFO][4935] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.2/26] IPv6=[] ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" HandleID="k8s-pod-network.d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.724837 containerd[1735]: 2026-03-13 12:24:14.701 [INFO][4923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0", GenerateName:"calico-kube-controllers-64c54c94bc-", Namespace:"calico-system", SelfLink:"", UID:"f9d96381-f483-497e-aa8b-0362e6cdf815", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c54c94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"calico-kube-controllers-64c54c94bc-57wqz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali39e38bce82f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:14.724837 containerd[1735]: 2026-03-13 12:24:14.701 [INFO][4923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.2/32] ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.724837 containerd[1735]: 2026-03-13 12:24:14.701 [INFO][4923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39e38bce82f ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.724837 containerd[1735]: 2026-03-13 12:24:14.704 [INFO][4923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.724837 containerd[1735]: 2026-03-13 12:24:14.706 [INFO][4923] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0", GenerateName:"calico-kube-controllers-64c54c94bc-", Namespace:"calico-system", SelfLink:"", UID:"f9d96381-f483-497e-aa8b-0362e6cdf815", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c54c94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3", Pod:"calico-kube-controllers-64c54c94bc-57wqz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali39e38bce82f", MAC:"ea:9e:cd:7d:9b:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:14.724837 containerd[1735]: 2026-03-13 12:24:14.720 [INFO][4923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3" Namespace="calico-system" Pod="calico-kube-controllers-64c54c94bc-57wqz" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:24:14.749397 containerd[1735]: time="2026-03-13T12:24:14.749161816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:14.749397 containerd[1735]: time="2026-03-13T12:24:14.749219776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:14.749397 containerd[1735]: time="2026-03-13T12:24:14.749234296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:14.749397 containerd[1735]: time="2026-03-13T12:24:14.749308817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:14.781655 systemd[1]: Started cri-containerd-d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3.scope - libcontainer container d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3. 
Mar 13 12:24:14.808504 containerd[1735]: time="2026-03-13T12:24:14.808460636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64c54c94bc-57wqz,Uid:f9d96381-f483-497e-aa8b-0362e6cdf815,Namespace:calico-system,Attempt:1,} returns sandbox id \"d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3\"" Mar 13 12:24:14.811120 containerd[1735]: time="2026-03-13T12:24:14.811080960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 12:24:15.435235 containerd[1735]: time="2026-03-13T12:24:15.434162926Z" level=info msg="StopPodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\"" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.479 [INFO][5015] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.479 [INFO][5015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" iface="eth0" netns="/var/run/netns/cni-e4bfdd8c-1177-eb45-0686-9b03c3a14e2b" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.479 [INFO][5015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" iface="eth0" netns="/var/run/netns/cni-e4bfdd8c-1177-eb45-0686-9b03c3a14e2b" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.479 [INFO][5015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" iface="eth0" netns="/var/run/netns/cni-e4bfdd8c-1177-eb45-0686-9b03c3a14e2b" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.479 [INFO][5015] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.479 [INFO][5015] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.500 [INFO][5023] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.500 [INFO][5023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.500 [INFO][5023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.510 [WARNING][5023] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.510 [INFO][5023] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.511 [INFO][5023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:15.516184 containerd[1735]: 2026-03-13 12:24:15.513 [INFO][5015] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:24:15.519140 containerd[1735]: time="2026-03-13T12:24:15.518608798Z" level=info msg="TearDown network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" successfully" Mar 13 12:24:15.519140 containerd[1735]: time="2026-03-13T12:24:15.518642918Z" level=info msg="StopPodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" returns successfully" Mar 13 12:24:15.519216 systemd[1]: run-containerd-runc-k8s.io-d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3-runc.KdhTqy.mount: Deactivated successfully. Mar 13 12:24:15.522107 systemd[1]: run-netns-cni\x2de4bfdd8c\x2d1177\x2deb45\x2d0686\x2d9b03c3a14e2b.mount: Deactivated successfully. 
Mar 13 12:24:15.526599 containerd[1735]: time="2026-03-13T12:24:15.526173891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cbn7n,Uid:e6e79ed8-8793-4170-b6be-57dc4f48b338,Namespace:kube-system,Attempt:1,}" Mar 13 12:24:15.667047 systemd-networkd[1354]: calif9169657261: Link UP Mar 13 12:24:15.671083 systemd-networkd[1354]: calif9169657261: Gained carrier Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.599 [INFO][5030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0 coredns-66bc5c9577- kube-system e6e79ed8-8793-4170-b6be-57dc4f48b338 972 0 2026-03-13 12:23:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 coredns-66bc5c9577-cbn7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif9169657261 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.599 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.621 [INFO][5041] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" 
HandleID="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.632 [INFO][5041] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" HandleID="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"coredns-66bc5c9577-cbn7n", "timestamp":"2026-03-13 12:24:15.621528303 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000200f20)} Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.632 [INFO][5041] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.632 [INFO][5041] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.632 [INFO][5041] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.634 [INFO][5041] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.639 [INFO][5041] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.643 [INFO][5041] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.646 [INFO][5041] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.648 [INFO][5041] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.648 [INFO][5041] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.649 [INFO][5041] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928 Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.654 [INFO][5041] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.660 [INFO][5041] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.100.3/26] block=192.168.100.0/26 handle="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.660 [INFO][5041] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.3/26] handle="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.660 [INFO][5041] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:15.692034 containerd[1735]: 2026-03-13 12:24:15.660 [INFO][5041] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.3/26] IPv6=[] ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" HandleID="k8s-pod-network.8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.692661 containerd[1735]: 2026-03-13 12:24:15.663 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6e79ed8-8793-4170-b6be-57dc4f48b338", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"coredns-66bc5c9577-cbn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9169657261", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:15.692661 containerd[1735]: 2026-03-13 12:24:15.663 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.3/32] ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.692661 containerd[1735]: 2026-03-13 12:24:15.663 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9169657261 
ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.692661 containerd[1735]: 2026-03-13 12:24:15.670 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.692661 containerd[1735]: 2026-03-13 12:24:15.671 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6e79ed8-8793-4170-b6be-57dc4f48b338", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928", 
Pod:"coredns-66bc5c9577-cbn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9169657261", MAC:"f2:89:cb:71:b7:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:15.692863 containerd[1735]: 2026-03-13 12:24:15.687 [INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928" Namespace="kube-system" Pod="coredns-66bc5c9577-cbn7n" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:24:15.735557 containerd[1735]: time="2026-03-13T12:24:15.735344828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:15.735557 containerd[1735]: time="2026-03-13T12:24:15.735405588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:15.735557 containerd[1735]: time="2026-03-13T12:24:15.735420868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:15.735928 containerd[1735]: time="2026-03-13T12:24:15.735541468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:15.756619 systemd[1]: Started cri-containerd-8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928.scope - libcontainer container 8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928. Mar 13 12:24:15.765020 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.101.115.65 Mar 13 12:24:15.794994 containerd[1735]: time="2026-03-13T12:24:15.794866095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cbn7n,Uid:e6e79ed8-8793-4170-b6be-57dc4f48b338,Namespace:kube-system,Attempt:1,} returns sandbox id \"8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928\"" Mar 13 12:24:15.808882 containerd[1735]: time="2026-03-13T12:24:15.808700440Z" level=info msg="CreateContainer within sandbox \"8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 12:24:15.863305 containerd[1735]: time="2026-03-13T12:24:15.863260698Z" level=info msg="CreateContainer within sandbox \"8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b255e938830f968dcf2a5d6d62dfaefc1894d1b17545a94db97cb8e1e088d43\"" Mar 13 12:24:15.864052 containerd[1735]: time="2026-03-13T12:24:15.863932219Z" level=info msg="StartContainer for \"4b255e938830f968dcf2a5d6d62dfaefc1894d1b17545a94db97cb8e1e088d43\"" Mar 13 12:24:15.890674 systemd[1]: Started 
cri-containerd-4b255e938830f968dcf2a5d6d62dfaefc1894d1b17545a94db97cb8e1e088d43.scope - libcontainer container 4b255e938830f968dcf2a5d6d62dfaefc1894d1b17545a94db97cb8e1e088d43. Mar 13 12:24:15.915671 containerd[1735]: time="2026-03-13T12:24:15.915549432Z" level=info msg="StartContainer for \"4b255e938830f968dcf2a5d6d62dfaefc1894d1b17545a94db97cb8e1e088d43\" returns successfully" Mar 13 12:24:16.437464 containerd[1735]: time="2026-03-13T12:24:16.434773646Z" level=info msg="StopPodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\"" Mar 13 12:24:16.447584 containerd[1735]: time="2026-03-13T12:24:16.447206108Z" level=info msg="StopPodSandbox for \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\"" Mar 13 12:24:16.447584 containerd[1735]: time="2026-03-13T12:24:16.447511829Z" level=info msg="StopPodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\"" Mar 13 12:24:16.561048 kubelet[3225]: I0313 12:24:16.560450 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cbn7n" podStartSLOduration=60.560413272 podStartE2EDuration="1m0.560413272s" podCreationTimestamp="2026-03-13 12:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:24:15.95891675 +0000 UTC m=+63.622496662" watchObservedRunningTime="2026-03-13 12:24:16.560413272 +0000 UTC m=+64.223993184" Mar 13 12:24:16.578161 systemd-networkd[1354]: cali39e38bce82f: Gained IPv6LL Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.543 [INFO][5171] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.693 [INFO][5171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" iface="eth0" netns="/var/run/netns/cni-a21d1846-23cc-9711-c53f-0e680abc6abb" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" iface="eth0" netns="/var/run/netns/cni-a21d1846-23cc-9711-c53f-0e680abc6abb" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" iface="eth0" netns="/var/run/netns/cni-a21d1846-23cc-9711-c53f-0e680abc6abb" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5171] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5171] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.730 [INFO][5205] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.745 [INFO][5205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.745 [INFO][5205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.755 [WARNING][5205] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.755 [INFO][5205] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.757 [INFO][5205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:16.772778 containerd[1735]: 2026-03-13 12:24:16.766 [INFO][5171] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:24:16.773802 containerd[1735]: time="2026-03-13T12:24:16.773614576Z" level=info msg="TearDown network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" successfully" Mar 13 12:24:16.776200 containerd[1735]: time="2026-03-13T12:24:16.774015416Z" level=info msg="StopPodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" returns successfully" Mar 13 12:24:16.777104 systemd[1]: run-netns-cni\x2da21d1846\x2d23cc\x2d9711\x2dc53f\x2d0e680abc6abb.mount: Deactivated successfully. 
Mar 13 12:24:16.780294 containerd[1735]: time="2026-03-13T12:24:16.779696067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xclsd,Uid:a8131164-7f35-4890-a8dd-a27d2707cba1,Namespace:kube-system,Attempt:1,}" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.546 [INFO][5169] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" iface="eth0" netns="/var/run/netns/cni-7e24bf8c-7aa0-347c-5d18-519bf0a681a3" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" iface="eth0" netns="/var/run/netns/cni-7e24bf8c-7aa0-347c-5d18-519bf0a681a3" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.695 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" iface="eth0" netns="/var/run/netns/cni-7e24bf8c-7aa0-347c-5d18-519bf0a681a3" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.695 [INFO][5169] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.695 [INFO][5169] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.736 [INFO][5207] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.748 [INFO][5207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.757 [INFO][5207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.772 [WARNING][5207] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.772 [INFO][5207] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.775 [INFO][5207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:16.786911 containerd[1735]: 2026-03-13 12:24:16.782 [INFO][5169] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:24:16.789459 containerd[1735]: time="2026-03-13T12:24:16.787687921Z" level=info msg="TearDown network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" successfully" Mar 13 12:24:16.789459 containerd[1735]: time="2026-03-13T12:24:16.787712921Z" level=info msg="StopPodSandbox for \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" returns successfully" Mar 13 12:24:16.790715 systemd[1]: run-netns-cni\x2d7e24bf8c\x2d7aa0\x2d347c\x2d5d18\x2d519bf0a681a3.mount: Deactivated successfully. 
Mar 13 12:24:16.796755 containerd[1735]: time="2026-03-13T12:24:16.796722817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-mtfnw,Uid:0a346f55-3551-472a-8019-424b407e361b,Namespace:calico-system,Attempt:1,}" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.567 [INFO][5170] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" iface="eth0" netns="/var/run/netns/cni-598f6f6a-42ff-02cc-6319-ce509c8bf27c" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.694 [INFO][5170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" iface="eth0" netns="/var/run/netns/cni-598f6f6a-42ff-02cc-6319-ce509c8bf27c" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.696 [INFO][5170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" iface="eth0" netns="/var/run/netns/cni-598f6f6a-42ff-02cc-6319-ce509c8bf27c" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.696 [INFO][5170] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.696 [INFO][5170] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.736 [INFO][5212] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.748 [INFO][5212] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.776 [INFO][5212] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.796 [WARNING][5212] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.796 [INFO][5212] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.797 [INFO][5212] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:16.803140 containerd[1735]: 2026-03-13 12:24:16.800 [INFO][5170] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:24:16.806121 containerd[1735]: time="2026-03-13T12:24:16.803210789Z" level=info msg="TearDown network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" successfully" Mar 13 12:24:16.806121 containerd[1735]: time="2026-03-13T12:24:16.803230429Z" level=info msg="StopPodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" returns successfully" Mar 13 12:24:16.805821 systemd[1]: run-netns-cni\x2d598f6f6a\x2d42ff\x2d02cc\x2d6319\x2dce509c8bf27c.mount: Deactivated successfully. 
Mar 13 12:24:16.811389 containerd[1735]: time="2026-03-13T12:24:16.811064923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-b86jq,Uid:42dbc5a5-80ff-4c29-b964-19cfe9b7fccd,Namespace:calico-system,Attempt:1,}" Mar 13 12:24:17.061248 systemd-networkd[1354]: califed3dbd70a0: Link UP Mar 13 12:24:17.064528 systemd-networkd[1354]: califed3dbd70a0: Gained carrier Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.897 [INFO][5230] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0 coredns-66bc5c9577- kube-system a8131164-7f35-4890-a8dd-a27d2707cba1 987 0 2026-03-13 12:23:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 coredns-66bc5c9577-xclsd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califed3dbd70a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.897 [INFO][5230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.959 [INFO][5260] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" 
HandleID="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.986 [INFO][5260] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" HandleID="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002737d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"coredns-66bc5c9577-xclsd", "timestamp":"2026-03-13 12:24:16.95958499 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400035da20)} Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.986 [INFO][5260] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.986 [INFO][5260] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.986 [INFO][5260] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:16.998 [INFO][5260] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.008 [INFO][5260] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.015 [INFO][5260] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.018 [INFO][5260] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.021 [INFO][5260] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.021 [INFO][5260] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.025 [INFO][5260] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7 Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.036 [INFO][5260] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.047 [INFO][5260] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.100.4/26] block=192.168.100.0/26 handle="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.047 [INFO][5260] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.4/26] handle="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.047 [INFO][5260] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:17.098767 containerd[1735]: 2026-03-13 12:24:17.047 [INFO][5260] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.4/26] IPv6=[] ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" HandleID="k8s-pod-network.3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.099570 containerd[1735]: 2026-03-13 12:24:17.055 [INFO][5230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a8131164-7f35-4890-a8dd-a27d2707cba1", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"coredns-66bc5c9577-xclsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califed3dbd70a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:17.099570 containerd[1735]: 2026-03-13 12:24:17.055 [INFO][5230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.4/32] ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.099570 containerd[1735]: 2026-03-13 12:24:17.055 [INFO][5230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califed3dbd70a0 
ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.099570 containerd[1735]: 2026-03-13 12:24:17.064 [INFO][5230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.099570 containerd[1735]: 2026-03-13 12:24:17.067 [INFO][5230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a8131164-7f35-4890-a8dd-a27d2707cba1", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7", 
Pod:"coredns-66bc5c9577-xclsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califed3dbd70a0", MAC:"32:bd:cb:7e:4e:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:17.099747 containerd[1735]: 2026-03-13 12:24:17.091 [INFO][5230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7" Namespace="kube-system" Pod="coredns-66bc5c9577-xclsd" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:24:17.150268 containerd[1735]: time="2026-03-13T12:24:17.150002133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:17.151994 containerd[1735]: time="2026-03-13T12:24:17.151641256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:17.151994 containerd[1735]: time="2026-03-13T12:24:17.151663456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:17.151994 containerd[1735]: time="2026-03-13T12:24:17.151759856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:17.169114 systemd-networkd[1354]: califf9b604f81a: Link UP Mar 13 12:24:17.170643 systemd-networkd[1354]: califf9b604f81a: Gained carrier Mar 13 12:24:17.191034 systemd[1]: Started cri-containerd-3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7.scope - libcontainer container 3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7. Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:16.953 [INFO][5240] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0 calico-apiserver-6c7f959487- calico-system 0a346f55-3551-472a-8019-424b407e361b 988 0 2026-03-13 12:23:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7f959487 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 calico-apiserver-6c7f959487-mtfnw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califf9b604f81a [] [] }} ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:16.953 [INFO][5240] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.071 [INFO][5272] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" HandleID="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.102 [INFO][5272] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" HandleID="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005f02a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"calico-apiserver-6c7f959487-mtfnw", "timestamp":"2026-03-13 12:24:17.071954633 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004ce000)} Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.102 [INFO][5272] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.102 [INFO][5272] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.103 [INFO][5272] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.106 [INFO][5272] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.119 [INFO][5272] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.126 [INFO][5272] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.129 [INFO][5272] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.132 [INFO][5272] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.132 [INFO][5272] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.135 [INFO][5272] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1 Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.141 [INFO][5272] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.153 [INFO][5272] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.100.5/26] block=192.168.100.0/26 handle="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.153 [INFO][5272] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.5/26] handle="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.155 [INFO][5272] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:17.204005 containerd[1735]: 2026-03-13 12:24:17.155 [INFO][5272] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.5/26] IPv6=[] ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" HandleID="k8s-pod-network.3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.204573 containerd[1735]: 2026-03-13 12:24:17.164 [INFO][5240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"0a346f55-3551-472a-8019-424b407e361b", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"calico-apiserver-6c7f959487-mtfnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califf9b604f81a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:17.204573 containerd[1735]: 2026-03-13 12:24:17.164 [INFO][5240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.5/32] ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.204573 containerd[1735]: 2026-03-13 12:24:17.164 [INFO][5240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf9b604f81a ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.204573 containerd[1735]: 2026-03-13 12:24:17.170 [INFO][5240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" 
WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.204573 containerd[1735]: 2026-03-13 12:24:17.173 [INFO][5240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"0a346f55-3551-472a-8019-424b407e361b", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1", Pod:"calico-apiserver-6c7f959487-mtfnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califf9b604f81a", MAC:"06:d6:8f:43:df:c7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:17.204573 containerd[1735]: 2026-03-13 12:24:17.195 [INFO][5240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-mtfnw" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:24:17.260932 containerd[1735]: time="2026-03-13T12:24:17.259919331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:17.260932 containerd[1735]: time="2026-03-13T12:24:17.260042371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:17.260932 containerd[1735]: time="2026-03-13T12:24:17.260141011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:17.261581 containerd[1735]: time="2026-03-13T12:24:17.260818572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:17.262754 containerd[1735]: time="2026-03-13T12:24:17.262720696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xclsd,Uid:a8131164-7f35-4890-a8dd-a27d2707cba1,Namespace:kube-system,Attempt:1,} returns sandbox id \"3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7\"" Mar 13 12:24:17.279910 containerd[1735]: time="2026-03-13T12:24:17.279542086Z" level=info msg="CreateContainer within sandbox \"3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 12:24:17.299268 systemd-networkd[1354]: cali02446c7813b: Link UP Mar 13 12:24:17.303638 systemd-networkd[1354]: cali02446c7813b: Gained carrier Mar 13 12:24:17.317159 systemd[1]: Started cri-containerd-3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1.scope - libcontainer container 3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1. 
Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:16.970 [INFO][5249] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0 goldmane-cccfbd5cf- calico-system 42dbc5a5-80ff-4c29-b964-19cfe9b7fccd 989 0 2026-03-13 12:23:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 goldmane-cccfbd5cf-b86jq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali02446c7813b [] [] }} ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:16.970 [INFO][5249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.096 [INFO][5278] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" HandleID="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.120 [INFO][5278] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" HandleID="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" 
Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034ac10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"goldmane-cccfbd5cf-b86jq", "timestamp":"2026-03-13 12:24:17.096613197 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000620420)} Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.120 [INFO][5278] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.154 [INFO][5278] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.154 [INFO][5278] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.207 [INFO][5278] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.219 [INFO][5278] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.229 [INFO][5278] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.233 [INFO][5278] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.237 [INFO][5278] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 
host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.238 [INFO][5278] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.246 [INFO][5278] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.258 [INFO][5278] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.276 [INFO][5278] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.6/26] block=192.168.100.0/26 handle="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.277 [INFO][5278] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.6/26] handle="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.277 [INFO][5278] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 12:24:17.353039 containerd[1735]: 2026-03-13 12:24:17.277 [INFO][5278] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.6/26] IPv6=[] ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" HandleID="k8s-pod-network.81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.353569 containerd[1735]: 2026-03-13 12:24:17.289 [INFO][5249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"goldmane-cccfbd5cf-b86jq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali02446c7813b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:17.353569 containerd[1735]: 2026-03-13 12:24:17.289 [INFO][5249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.6/32] ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.353569 containerd[1735]: 2026-03-13 12:24:17.289 [INFO][5249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02446c7813b ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.353569 containerd[1735]: 2026-03-13 12:24:17.309 [INFO][5249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.353569 containerd[1735]: 2026-03-13 12:24:17.310 [INFO][5249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd", 
ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f", Pod:"goldmane-cccfbd5cf-b86jq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali02446c7813b", MAC:"32:9a:a7:e5:6d:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:17.353569 containerd[1735]: 2026-03-13 12:24:17.342 [INFO][5249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f" Namespace="calico-system" Pod="goldmane-cccfbd5cf-b86jq" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:24:17.365243 containerd[1735]: time="2026-03-13T12:24:17.365199440Z" level=info msg="CreateContainer within sandbox \"3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e282272798d4db5c2fc01b00cb4e9885c138f9a9cb617f2221f9f582a6a2d80\"" Mar 13 12:24:17.366045 containerd[1735]: time="2026-03-13T12:24:17.366018482Z" level=info msg="StartContainer for 
\"8e282272798d4db5c2fc01b00cb4e9885c138f9a9cb617f2221f9f582a6a2d80\"" Mar 13 12:24:17.409611 containerd[1735]: time="2026-03-13T12:24:17.409559800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-mtfnw,Uid:0a346f55-3551-472a-8019-424b407e361b,Namespace:calico-system,Attempt:1,} returns sandbox id \"3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1\"" Mar 13 12:24:17.425612 systemd[1]: Started cri-containerd-8e282272798d4db5c2fc01b00cb4e9885c138f9a9cb617f2221f9f582a6a2d80.scope - libcontainer container 8e282272798d4db5c2fc01b00cb4e9885c138f9a9cb617f2221f9f582a6a2d80. Mar 13 12:24:17.438419 containerd[1735]: time="2026-03-13T12:24:17.438032651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:17.438419 containerd[1735]: time="2026-03-13T12:24:17.438097091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:17.438419 containerd[1735]: time="2026-03-13T12:24:17.438112291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:17.438419 containerd[1735]: time="2026-03-13T12:24:17.438204932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:17.469616 systemd[1]: Started cri-containerd-81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f.scope - libcontainer container 81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f. 
Mar 13 12:24:17.480711 containerd[1735]: time="2026-03-13T12:24:17.480130647Z" level=info msg="StartContainer for \"8e282272798d4db5c2fc01b00cb4e9885c138f9a9cb617f2221f9f582a6a2d80\" returns successfully" Mar 13 12:24:17.537570 systemd-networkd[1354]: calif9169657261: Gained IPv6LL Mar 13 12:24:17.575521 containerd[1735]: time="2026-03-13T12:24:17.573955976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-b86jq,Uid:42dbc5a5-80ff-4c29-b964-19cfe9b7fccd,Namespace:calico-system,Attempt:1,} returns sandbox id \"81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f\"" Mar 13 12:24:17.912928 containerd[1735]: time="2026-03-13T12:24:17.912768425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:17.916912 containerd[1735]: time="2026-03-13T12:24:17.916881033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 13 12:24:17.921479 containerd[1735]: time="2026-03-13T12:24:17.920620200Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:17.925991 containerd[1735]: time="2026-03-13T12:24:17.925965489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:17.926742 containerd[1735]: time="2026-03-13T12:24:17.926714771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.115592891s" Mar 13 12:24:17.926862 containerd[1735]: time="2026-03-13T12:24:17.926836051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 13 12:24:17.928194 containerd[1735]: time="2026-03-13T12:24:17.928168173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 12:24:17.958360 containerd[1735]: time="2026-03-13T12:24:17.958322507Z" level=info msg="CreateContainer within sandbox \"d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 12:24:17.968123 kubelet[3225]: I0313 12:24:17.968057 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xclsd" podStartSLOduration=61.968041285 podStartE2EDuration="1m1.968041285s" podCreationTimestamp="2026-03-13 12:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:24:17.966396922 +0000 UTC m=+65.629976834" watchObservedRunningTime="2026-03-13 12:24:17.968041285 +0000 UTC m=+65.631621157" Mar 13 12:24:18.000776 containerd[1735]: time="2026-03-13T12:24:18.000712984Z" level=info msg="CreateContainer within sandbox \"d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"42f96b429f8e37784edf7d5203ce364ee31721ef9519cbe323542f50a129ef2b\"" Mar 13 12:24:18.001888 containerd[1735]: time="2026-03-13T12:24:18.001858626Z" level=info msg="StartContainer for \"42f96b429f8e37784edf7d5203ce364ee31721ef9519cbe323542f50a129ef2b\"" Mar 13 12:24:18.029689 systemd[1]: Started 
cri-containerd-42f96b429f8e37784edf7d5203ce364ee31721ef9519cbe323542f50a129ef2b.scope - libcontainer container 42f96b429f8e37784edf7d5203ce364ee31721ef9519cbe323542f50a129ef2b. Mar 13 12:24:18.067236 containerd[1735]: time="2026-03-13T12:24:18.067061623Z" level=info msg="StartContainer for \"42f96b429f8e37784edf7d5203ce364ee31721ef9519cbe323542f50a129ef2b\" returns successfully" Mar 13 12:24:18.434885 containerd[1735]: time="2026-03-13T12:24:18.434825445Z" level=info msg="StopPodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\"" Mar 13 12:24:18.436588 containerd[1735]: time="2026-03-13T12:24:18.436401168Z" level=info msg="StopPodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\"" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.498 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.498 [INFO][5581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" iface="eth0" netns="/var/run/netns/cni-7ff50695-3078-b59f-39a9-e5934e23d432" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.498 [INFO][5581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" iface="eth0" netns="/var/run/netns/cni-7ff50695-3078-b59f-39a9-e5934e23d432" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.499 [INFO][5581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" iface="eth0" netns="/var/run/netns/cni-7ff50695-3078-b59f-39a9-e5934e23d432" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.499 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.499 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.532 [INFO][5594] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.533 [INFO][5594] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.533 [INFO][5594] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.550 [WARNING][5594] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.550 [INFO][5594] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.551 [INFO][5594] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:18.556501 containerd[1735]: 2026-03-13 12:24:18.554 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:24:18.558123 containerd[1735]: time="2026-03-13T12:24:18.557592666Z" level=info msg="TearDown network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" successfully" Mar 13 12:24:18.558123 containerd[1735]: time="2026-03-13T12:24:18.557623786Z" level=info msg="StopPodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" returns successfully" Mar 13 12:24:18.560455 systemd[1]: run-netns-cni\x2d7ff50695\x2d3078\x2db59f\x2d39a9\x2de5934e23d432.mount: Deactivated successfully. 
Mar 13 12:24:18.563707 systemd-networkd[1354]: califed3dbd70a0: Gained IPv6LL Mar 13 12:24:18.563972 systemd-networkd[1354]: califf9b604f81a: Gained IPv6LL Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.509 [INFO][5582] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.509 [INFO][5582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" iface="eth0" netns="/var/run/netns/cni-7cf6d40e-2ced-15a1-f64d-1d55e5dcfc51" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.511 [INFO][5582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" iface="eth0" netns="/var/run/netns/cni-7cf6d40e-2ced-15a1-f64d-1d55e5dcfc51" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.511 [INFO][5582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" iface="eth0" netns="/var/run/netns/cni-7cf6d40e-2ced-15a1-f64d-1d55e5dcfc51" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.511 [INFO][5582] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.511 [INFO][5582] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.544 [INFO][5599] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.544 [INFO][5599] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.551 [INFO][5599] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.570 [WARNING][5599] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.570 [INFO][5599] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.573 [INFO][5599] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:18.580169 containerd[1735]: 2026-03-13 12:24:18.576 [INFO][5582] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:24:18.583157 containerd[1735]: time="2026-03-13T12:24:18.581922509Z" level=info msg="TearDown network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" successfully" Mar 13 12:24:18.583157 containerd[1735]: time="2026-03-13T12:24:18.581954390Z" level=info msg="StopPodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" returns successfully" Mar 13 12:24:18.583992 systemd[1]: run-netns-cni\x2d7cf6d40e\x2d2ced\x2d15a1\x2df64d\x2d1d55e5dcfc51.mount: Deactivated successfully. 
Mar 13 12:24:18.625590 systemd-networkd[1354]: cali02446c7813b: Gained IPv6LL Mar 13 12:24:18.660373 containerd[1735]: time="2026-03-13T12:24:18.660099770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q6rzg,Uid:1e7c6c09-c767-4bee-9251-d729af24c7dc,Namespace:calico-system,Attempt:1,}" Mar 13 12:24:18.666177 containerd[1735]: time="2026-03-13T12:24:18.666145061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-nm874,Uid:f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1,Namespace:calico-system,Attempt:1,}" Mar 13 12:24:18.877066 systemd-networkd[1354]: cali48cb8262ea8: Link UP Mar 13 12:24:18.878260 systemd-networkd[1354]: cali48cb8262ea8: Gained carrier Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.788 [INFO][5616] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0 csi-node-driver- calico-system 1e7c6c09-c767-4bee-9251-d729af24c7dc 1027 0 2026-03-13 12:23:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 csi-node-driver-q6rzg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali48cb8262ea8 [] [] }} ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.788 [INFO][5616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" 
WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.824 [INFO][5640] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" HandleID="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.836 [INFO][5640] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" HandleID="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"csi-node-driver-q6rzg", "timestamp":"2026-03-13 12:24:18.824367266 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000268f20)} Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.836 [INFO][5640] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.836 [INFO][5640] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.836 [INFO][5640] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.839 [INFO][5640] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.844 [INFO][5640] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.850 [INFO][5640] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.853 [INFO][5640] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.855 [INFO][5640] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.855 [INFO][5640] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.857 [INFO][5640] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.862 [INFO][5640] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.870 [INFO][5640] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.100.7/26] block=192.168.100.0/26 handle="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.870 [INFO][5640] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.7/26] handle="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.870 [INFO][5640] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:18.895473 containerd[1735]: 2026-03-13 12:24:18.871 [INFO][5640] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.7/26] IPv6=[] ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" HandleID="k8s-pod-network.1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.897338 containerd[1735]: 2026-03-13 12:24:18.873 [INFO][5616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e7c6c09-c767-4bee-9251-d729af24c7dc", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"csi-node-driver-q6rzg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48cb8262ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:18.897338 containerd[1735]: 2026-03-13 12:24:18.873 [INFO][5616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.7/32] ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.897338 containerd[1735]: 2026-03-13 12:24:18.873 [INFO][5616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48cb8262ea8 ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.897338 containerd[1735]: 2026-03-13 12:24:18.878 [INFO][5616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.897338 containerd[1735]: 
2026-03-13 12:24:18.878 [INFO][5616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e7c6c09-c767-4bee-9251-d729af24c7dc", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f", Pod:"csi-node-driver-q6rzg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48cb8262ea8", MAC:"e6:65:ad:89:04:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:18.897338 containerd[1735]: 2026-03-13 12:24:18.891 
[INFO][5616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f" Namespace="calico-system" Pod="csi-node-driver-q6rzg" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:24:18.921877 containerd[1735]: time="2026-03-13T12:24:18.921780401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:18.921877 containerd[1735]: time="2026-03-13T12:24:18.921836881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:18.922244 containerd[1735]: time="2026-03-13T12:24:18.921861841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:18.922569 containerd[1735]: time="2026-03-13T12:24:18.922478242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:18.938605 systemd[1]: Started cri-containerd-1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f.scope - libcontainer container 1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f. 
Mar 13 12:24:18.984752 containerd[1735]: time="2026-03-13T12:24:18.984714994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q6rzg,Uid:1e7c6c09-c767-4bee-9251-d729af24c7dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f\"" Mar 13 12:24:18.996962 systemd-networkd[1354]: cali8f9f5cbf78b: Link UP Mar 13 12:24:18.997225 systemd-networkd[1354]: cali8f9f5cbf78b: Gained carrier Mar 13 12:24:19.012413 kubelet[3225]: I0313 12:24:19.012358 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64c54c94bc-57wqz" podStartSLOduration=43.89463307 podStartE2EDuration="47.012339764s" podCreationTimestamp="2026-03-13 12:23:32 +0000 UTC" firstStartedPulling="2026-03-13 12:24:14.809957798 +0000 UTC m=+62.473537710" lastFinishedPulling="2026-03-13 12:24:17.927664492 +0000 UTC m=+65.591244404" observedRunningTime="2026-03-13 12:24:18.980859307 +0000 UTC m=+66.644439219" watchObservedRunningTime="2026-03-13 12:24:19.012339764 +0000 UTC m=+66.675919676" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.803 [INFO][5627] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0 calico-apiserver-6c7f959487- calico-system f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1 1028 0 2026-03-13 12:23:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7f959487 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-461ebd96c0 calico-apiserver-6c7f959487-nm874 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8f9f5cbf78b [] [] }} ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" 
Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.803 [INFO][5627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.847 [INFO][5645] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" HandleID="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.863 [INFO][5645] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" HandleID="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-461ebd96c0", "pod":"calico-apiserver-6c7f959487-nm874", "timestamp":"2026-03-13 12:24:18.847817868 +0000 UTC"}, Hostname:"ci-4081.3.101-461ebd96c0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002031e0)} Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.863 [INFO][5645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.870 [INFO][5645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.870 [INFO][5645] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-461ebd96c0' Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.941 [INFO][5645] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.946 [INFO][5645] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.952 [INFO][5645] ipam/ipam.go 526: Trying affinity for 192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.957 [INFO][5645] ipam/ipam.go 160: Attempting to load block cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.963 [INFO][5645] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.100.0/26 host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.963 [INFO][5645] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.100.0/26 handle="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.967 [INFO][5645] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038 Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.973 [INFO][5645] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.100.0/26 handle="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" 
host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.986 [INFO][5645] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.100.8/26] block=192.168.100.0/26 handle="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.986 [INFO][5645] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.100.8/26] handle="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" host="ci-4081.3.101-461ebd96c0" Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.986 [INFO][5645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:24:19.016925 containerd[1735]: 2026-03-13 12:24:18.986 [INFO][5645] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.100.8/26] IPv6=[] ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" HandleID="k8s-pod-network.aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.017402 containerd[1735]: 2026-03-13 12:24:18.991 [INFO][5627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"", Pod:"calico-apiserver-6c7f959487-nm874", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f9f5cbf78b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:19.017402 containerd[1735]: 2026-03-13 12:24:18.991 [INFO][5627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.8/32] ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.017402 containerd[1735]: 2026-03-13 12:24:18.992 [INFO][5627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f9f5cbf78b ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.017402 containerd[1735]: 2026-03-13 12:24:18.996 [INFO][5627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.017402 containerd[1735]: 2026-03-13 12:24:18.998 [INFO][5627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038", Pod:"calico-apiserver-6c7f959487-nm874", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f9f5cbf78b", MAC:"3e:bd:0f:d7:eb:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:24:19.017402 containerd[1735]: 2026-03-13 12:24:19.013 [INFO][5627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038" Namespace="calico-system" Pod="calico-apiserver-6c7f959487-nm874" WorkloadEndpoint="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:24:19.050231 containerd[1735]: time="2026-03-13T12:24:19.050146472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:24:19.050630 containerd[1735]: time="2026-03-13T12:24:19.050587513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:24:19.050691 containerd[1735]: time="2026-03-13T12:24:19.050634793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:19.050892 containerd[1735]: time="2026-03-13T12:24:19.050854953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:24:19.069598 systemd[1]: Started cri-containerd-aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038.scope - libcontainer container aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038. 
Mar 13 12:24:19.100880 containerd[1735]: time="2026-03-13T12:24:19.100845003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7f959487-nm874,Uid:f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038\"" Mar 13 12:24:20.407235 containerd[1735]: time="2026-03-13T12:24:20.407186674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:20.410606 containerd[1735]: time="2026-03-13T12:24:20.410569640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 13 12:24:20.414549 containerd[1735]: time="2026-03-13T12:24:20.414501167Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:20.419888 containerd[1735]: time="2026-03-13T12:24:20.419842096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:20.420953 containerd[1735]: time="2026-03-13T12:24:20.420616778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.492419405s" Mar 13 12:24:20.420953 containerd[1735]: time="2026-03-13T12:24:20.420649138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 13 12:24:20.422463 containerd[1735]: time="2026-03-13T12:24:20.422328781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 12:24:20.429771 containerd[1735]: time="2026-03-13T12:24:20.429746154Z" level=info msg="CreateContainer within sandbox \"3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 12:24:20.467631 containerd[1735]: time="2026-03-13T12:24:20.467598462Z" level=info msg="CreateContainer within sandbox \"3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c219b642bea69a29f9854ad43d3c916e39fa187a0bac04a7f06bffee64ccae66\"" Mar 13 12:24:20.469446 containerd[1735]: time="2026-03-13T12:24:20.468299704Z" level=info msg="StartContainer for \"c219b642bea69a29f9854ad43d3c916e39fa187a0bac04a7f06bffee64ccae66\"" Mar 13 12:24:20.519586 systemd[1]: Started cri-containerd-c219b642bea69a29f9854ad43d3c916e39fa187a0bac04a7f06bffee64ccae66.scope - libcontainer container c219b642bea69a29f9854ad43d3c916e39fa187a0bac04a7f06bffee64ccae66. 
Mar 13 12:24:20.555420 containerd[1735]: time="2026-03-13T12:24:20.555380180Z" level=info msg="StartContainer for \"c219b642bea69a29f9854ad43d3c916e39fa187a0bac04a7f06bffee64ccae66\" returns successfully" Mar 13 12:24:20.609555 systemd-networkd[1354]: cali48cb8262ea8: Gained IPv6LL Mar 13 12:24:20.801536 systemd-networkd[1354]: cali8f9f5cbf78b: Gained IPv6LL Mar 13 12:24:20.984146 kubelet[3225]: I0313 12:24:20.984087 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c7f959487-mtfnw" podStartSLOduration=47.977353382 podStartE2EDuration="50.983970232s" podCreationTimestamp="2026-03-13 12:23:30 +0000 UTC" firstStartedPulling="2026-03-13 12:24:17.414803369 +0000 UTC m=+65.078383281" lastFinishedPulling="2026-03-13 12:24:20.421420179 +0000 UTC m=+68.085000131" observedRunningTime="2026-03-13 12:24:20.98293395 +0000 UTC m=+68.646513862" watchObservedRunningTime="2026-03-13 12:24:20.983970232 +0000 UTC m=+68.647550144" Mar 13 12:24:21.971932 kubelet[3225]: I0313 12:24:21.971011 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:24:22.846461 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.101.115.65 Mar 13 12:24:25.009999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988901806.mount: Deactivated successfully. 
Mar 13 12:24:26.048689 containerd[1735]: time="2026-03-13T12:24:26.048642648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:26.096830 containerd[1735]: time="2026-03-13T12:24:26.096523934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 13 12:24:26.143499 containerd[1735]: time="2026-03-13T12:24:26.143328178Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:26.192251 containerd[1735]: time="2026-03-13T12:24:26.192205625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:26.193103 containerd[1735]: time="2026-03-13T12:24:26.193075747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 5.770715926s" Mar 13 12:24:26.193155 containerd[1735]: time="2026-03-13T12:24:26.193108187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 13 12:24:26.195081 containerd[1735]: time="2026-03-13T12:24:26.195021470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 12:24:26.240356 containerd[1735]: time="2026-03-13T12:24:26.240322992Z" level=info msg="CreateContainer within sandbox \"81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 12:24:26.543761 containerd[1735]: time="2026-03-13T12:24:26.543714816Z" level=info msg="CreateContainer within sandbox \"81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841\"" Mar 13 12:24:26.548464 containerd[1735]: time="2026-03-13T12:24:26.546027500Z" level=info msg="StartContainer for \"a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841\"" Mar 13 12:24:26.583200 systemd[1]: run-containerd-runc-k8s.io-a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841-runc.yLJrJU.mount: Deactivated successfully. Mar 13 12:24:26.594574 systemd[1]: Started cri-containerd-a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841.scope - libcontainer container a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841. Mar 13 12:24:26.652727 containerd[1735]: time="2026-03-13T12:24:26.652653851Z" level=info msg="StartContainer for \"a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841\" returns successfully" Mar 13 12:24:27.004868 kubelet[3225]: I0313 12:24:27.004003 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-b86jq" podStartSLOduration=48.386881273 podStartE2EDuration="57.003988241s" podCreationTimestamp="2026-03-13 12:23:30 +0000 UTC" firstStartedPulling="2026-03-13 12:24:17.577054581 +0000 UTC m=+65.240634493" lastFinishedPulling="2026-03-13 12:24:26.194161549 +0000 UTC m=+73.857741461" observedRunningTime="2026-03-13 12:24:27.002785399 +0000 UTC m=+74.666365311" watchObservedRunningTime="2026-03-13 12:24:27.003988241 +0000 UTC m=+74.667568113" Mar 13 12:24:27.647095 kubelet[3225]: I0313 12:24:27.646781 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:24:27.695626 kernel: icmp: detected local route for 
10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.695736 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.695756 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.695771 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.722012 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.722108 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.722129 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.736871 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.736950 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.873286 kernel: net_ratelimit: 15 callbacks suppressed Mar 13 12:24:27.873400 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.873423 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.873449 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:27.889401 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.102.126.214 Mar 13 12:24:28.596849 containerd[1735]: time="2026-03-13T12:24:28.596799338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:28.600362 containerd[1735]: time="2026-03-13T12:24:28.600330904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 13 12:24:28.645462 containerd[1735]: time="2026-03-13T12:24:28.645213385Z" level=info 
msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:28.692727 containerd[1735]: time="2026-03-13T12:24:28.692456790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:28.693876 containerd[1735]: time="2026-03-13T12:24:28.693204991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.498145961s" Mar 13 12:24:28.693876 containerd[1735]: time="2026-03-13T12:24:28.693237871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 13 12:24:28.694157 containerd[1735]: time="2026-03-13T12:24:28.694132033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 12:24:28.802034 containerd[1735]: time="2026-03-13T12:24:28.801898986Z" level=info msg="CreateContainer within sandbox \"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 12:24:29.149468 containerd[1735]: time="2026-03-13T12:24:29.149396809Z" level=info msg="CreateContainer within sandbox \"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b792de34802c3f1aa266105a8579125107849c28230aa10052d836a179aa9d70\"" Mar 13 12:24:29.150520 containerd[1735]: time="2026-03-13T12:24:29.150129651Z" level=info msg="StartContainer for 
\"b792de34802c3f1aa266105a8579125107849c28230aa10052d836a179aa9d70\"" Mar 13 12:24:29.182596 systemd[1]: Started cri-containerd-b792de34802c3f1aa266105a8579125107849c28230aa10052d836a179aa9d70.scope - libcontainer container b792de34802c3f1aa266105a8579125107849c28230aa10052d836a179aa9d70. Mar 13 12:24:29.291028 containerd[1735]: time="2026-03-13T12:24:29.290978023Z" level=info msg="StartContainer for \"b792de34802c3f1aa266105a8579125107849c28230aa10052d836a179aa9d70\" returns successfully" Mar 13 12:24:29.544380 containerd[1735]: time="2026-03-13T12:24:29.543927077Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:29.547675 containerd[1735]: time="2026-03-13T12:24:29.546529322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 12:24:29.548852 containerd[1735]: time="2026-03-13T12:24:29.548824886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 854.659973ms" Mar 13 12:24:29.548988 containerd[1735]: time="2026-03-13T12:24:29.548857646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 13 12:24:29.550616 containerd[1735]: time="2026-03-13T12:24:29.550415729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 12:24:29.639467 containerd[1735]: time="2026-03-13T12:24:29.639413168Z" level=info msg="CreateContainer within sandbox \"aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 12:24:30.004938 containerd[1735]: time="2026-03-13T12:24:30.004854264Z" level=info msg="CreateContainer within sandbox \"aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ec43748b7c7377fc398f6c6a22ee32d632ebf685062747ead7950803d3899fdf\"" Mar 13 12:24:30.006522 containerd[1735]: time="2026-03-13T12:24:30.006493347Z" level=info msg="StartContainer for \"ec43748b7c7377fc398f6c6a22ee32d632ebf685062747ead7950803d3899fdf\"" Mar 13 12:24:30.041574 systemd[1]: Started cri-containerd-ec43748b7c7377fc398f6c6a22ee32d632ebf685062747ead7950803d3899fdf.scope - libcontainer container ec43748b7c7377fc398f6c6a22ee32d632ebf685062747ead7950803d3899fdf. Mar 13 12:24:30.111116 containerd[1735]: time="2026-03-13T12:24:30.110974054Z" level=info msg="StartContainer for \"ec43748b7c7377fc398f6c6a22ee32d632ebf685062747ead7950803d3899fdf\" returns successfully" Mar 13 12:24:32.006637 kubelet[3225]: I0313 12:24:32.006599 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:24:32.553951 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.101.115.65 Mar 13 12:24:34.995061 kubelet[3225]: I0313 12:24:34.995000 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6c7f959487-nm874" podStartSLOduration=54.547585228 podStartE2EDuration="1m4.994985029s" podCreationTimestamp="2026-03-13 12:23:30 +0000 UTC" firstStartedPulling="2026-03-13 12:24:19.102309886 +0000 UTC m=+66.765889798" lastFinishedPulling="2026-03-13 12:24:29.549709687 +0000 UTC m=+77.213289599" observedRunningTime="2026-03-13 12:24:31.016746919 +0000 UTC m=+78.680326871" watchObservedRunningTime="2026-03-13 12:24:34.994985029 +0000 UTC m=+82.658564941" Mar 13 12:24:39.344605 containerd[1735]: time="2026-03-13T12:24:39.344545523Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:39.390823 containerd[1735]: time="2026-03-13T12:24:39.390760285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 13 12:24:39.454454 containerd[1735]: time="2026-03-13T12:24:39.453302517Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:39.501478 containerd[1735]: time="2026-03-13T12:24:39.501414523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:24:39.502739 containerd[1735]: time="2026-03-13T12:24:39.502134125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 9.951670716s" Mar 13 12:24:39.502739 containerd[1735]: time="2026-03-13T12:24:39.502565925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 13 12:24:39.511752 containerd[1735]: time="2026-03-13T12:24:39.511642942Z" level=info msg="CreateContainer within sandbox \"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 12:24:39.855322 containerd[1735]: time="2026-03-13T12:24:39.855188556Z" level=info 
msg="CreateContainer within sandbox \"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"37da87f5733e16b2cf701a5ce2864939ab3ca24f5da81792490b9878b18ef7d2\"" Mar 13 12:24:39.856035 containerd[1735]: time="2026-03-13T12:24:39.855884957Z" level=info msg="StartContainer for \"37da87f5733e16b2cf701a5ce2864939ab3ca24f5da81792490b9878b18ef7d2\"" Mar 13 12:24:39.898737 systemd[1]: Started cri-containerd-37da87f5733e16b2cf701a5ce2864939ab3ca24f5da81792490b9878b18ef7d2.scope - libcontainer container 37da87f5733e16b2cf701a5ce2864939ab3ca24f5da81792490b9878b18ef7d2. Mar 13 12:24:39.931329 containerd[1735]: time="2026-03-13T12:24:39.931247292Z" level=info msg="StartContainer for \"37da87f5733e16b2cf701a5ce2864939ab3ca24f5da81792490b9878b18ef7d2\" returns successfully" Mar 13 12:24:40.559197 kubelet[3225]: I0313 12:24:40.559154 3225 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 12:24:40.559197 kubelet[3225]: I0313 12:24:40.559199 3225 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 12:24:47.259651 kubelet[3225]: I0313 12:24:47.259375 3225 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:24:47.285765 kubelet[3225]: I0313 12:24:47.285697 3225 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q6rzg" podStartSLOduration=54.769471866 podStartE2EDuration="1m15.285682554s" podCreationTimestamp="2026-03-13 12:23:32 +0000 UTC" firstStartedPulling="2026-03-13 12:24:18.988272801 +0000 UTC m=+66.651852713" lastFinishedPulling="2026-03-13 12:24:39.504483489 +0000 UTC m=+87.168063401" observedRunningTime="2026-03-13 12:24:40.051536667 +0000 UTC m=+87.715116579" 
watchObservedRunningTime="2026-03-13 12:24:47.285682554 +0000 UTC m=+94.949262466" Mar 13 12:24:47.641460 kernel: icmp: detected local route for 10.200.20.18 during ICMP sending, src 10.101.115.65 Mar 13 12:24:56.152951 systemd[1]: Started sshd@7-10.200.20.18:22-10.200.16.10:52150.service - OpenSSH per-connection server daemon (10.200.16.10:52150). Mar 13 12:24:56.652323 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 52150 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:56.654538 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:56.658398 systemd-logind[1698]: New session 10 of user core. Mar 13 12:24:56.667563 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 12:24:57.073252 sshd[6185]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:57.076905 systemd[1]: sshd@7-10.200.20.18:22-10.200.16.10:52150.service: Deactivated successfully. Mar 13 12:24:57.079200 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 12:24:57.080342 systemd-logind[1698]: Session 10 logged out. Waiting for processes to exit. Mar 13 12:24:57.081310 systemd-logind[1698]: Removed session 10. Mar 13 12:25:02.168731 systemd[1]: Started sshd@8-10.200.20.18:22-10.200.16.10:59514.service - OpenSSH per-connection server daemon (10.200.16.10:59514). Mar 13 12:25:02.602351 systemd[1]: run-containerd-runc-k8s.io-a53e659fc56b8d705dd57ee93615cc23acc1a4cb9af7549218dc4876a40d3841-runc.HgJAUn.mount: Deactivated successfully. Mar 13 12:25:02.655098 sshd[6228]: Accepted publickey for core from 10.200.16.10 port 59514 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:25:02.656564 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:25:02.661503 systemd-logind[1698]: New session 11 of user core. Mar 13 12:25:02.662598 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 13 12:25:03.096212 sshd[6228]: pam_unix(sshd:session): session closed for user core Mar 13 12:25:03.100014 systemd[1]: sshd@8-10.200.20.18:22-10.200.16.10:59514.service: Deactivated successfully. Mar 13 12:25:03.101902 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 12:25:03.103212 systemd-logind[1698]: Session 11 logged out. Waiting for processes to exit. Mar 13 12:25:03.104401 systemd-logind[1698]: Removed session 11. Mar 13 12:25:04.991650 systemd[1]: run-containerd-runc-k8s.io-53946500c91142fa820b4e56e3431bcfbfe1a57d1a71bc4732a31cdf4d34e3ce-runc.DzWRJV.mount: Deactivated successfully. Mar 13 12:25:08.184534 systemd[1]: Started sshd@9-10.200.20.18:22-10.200.16.10:59518.service - OpenSSH per-connection server daemon (10.200.16.10:59518). Mar 13 12:25:08.698319 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 59518 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:25:08.699779 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:25:08.709543 systemd-logind[1698]: New session 12 of user core. Mar 13 12:25:08.713796 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 12:25:09.323999 sshd[6310]: pam_unix(sshd:session): session closed for user core Mar 13 12:25:09.328168 systemd-logind[1698]: Session 12 logged out. Waiting for processes to exit. Mar 13 12:25:09.329074 systemd[1]: sshd@9-10.200.20.18:22-10.200.16.10:59518.service: Deactivated successfully. Mar 13 12:25:09.333781 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 12:25:09.335863 systemd-logind[1698]: Removed session 12. 
Mar 13 12:25:12.578376 containerd[1735]: time="2026-03-13T12:25:12.578331562Z" level=info msg="StopPodSandbox for \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\"" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.612 [WARNING][6335] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"0a346f55-3551-472a-8019-424b407e361b", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1", Pod:"calico-apiserver-6c7f959487-mtfnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califf9b604f81a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.612 [INFO][6335] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.612 [INFO][6335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" iface="eth0" netns="" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.612 [INFO][6335] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.612 [INFO][6335] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.630 [INFO][6343] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.630 [INFO][6343] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.630 [INFO][6343] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.639 [WARNING][6343] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.639 [INFO][6343] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.640 [INFO][6343] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:12.643951 containerd[1735]: 2026-03-13 12:25:12.642 [INFO][6335] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.644421 containerd[1735]: time="2026-03-13T12:25:12.643981761Z" level=info msg="TearDown network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" successfully" Mar 13 12:25:12.644421 containerd[1735]: time="2026-03-13T12:25:12.644009761Z" level=info msg="StopPodSandbox for \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" returns successfully" Mar 13 12:25:12.645173 containerd[1735]: time="2026-03-13T12:25:12.644850442Z" level=info msg="RemovePodSandbox for \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\"" Mar 13 12:25:12.645173 containerd[1735]: time="2026-03-13T12:25:12.644892602Z" level=info msg="Forcibly stopping sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\"" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.678 [WARNING][6357] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"0a346f55-3551-472a-8019-424b407e361b", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"3e4dd92bbe71270e82817001495516d160b83522f1ed3ef69ffc13d3950aafc1", Pod:"calico-apiserver-6c7f959487-mtfnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califf9b604f81a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.679 [INFO][6357] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.679 [INFO][6357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" iface="eth0" netns="" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.679 [INFO][6357] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.679 [INFO][6357] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.697 [INFO][6364] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.697 [INFO][6364] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.697 [INFO][6364] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.705 [WARNING][6364] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.705 [INFO][6364] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" HandleID="k8s-pod-network.800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--mtfnw-eth0" Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.707 [INFO][6364] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:12.710410 containerd[1735]: 2026-03-13 12:25:12.708 [INFO][6357] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e" Mar 13 12:25:12.710905 containerd[1735]: time="2026-03-13T12:25:12.710473841Z" level=info msg="TearDown network for sandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" successfully" Mar 13 12:25:12.793071 containerd[1735]: time="2026-03-13T12:25:12.792817750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:25:12.793071 containerd[1735]: time="2026-03-13T12:25:12.792900190Z" level=info msg="RemovePodSandbox \"800a866ad68588c160942fec836f3928bfc6b69170ef615d7c48eeac72a30d0e\" returns successfully" Mar 13 12:25:12.793373 containerd[1735]: time="2026-03-13T12:25:12.793334871Z" level=info msg="StopPodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\"" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.827 [WARNING][6378] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f", Pod:"goldmane-cccfbd5cf-b86jq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali02446c7813b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.828 [INFO][6378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.828 [INFO][6378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" iface="eth0" netns="" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.828 [INFO][6378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.828 [INFO][6378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.846 [INFO][6385] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.846 [INFO][6385] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.847 [INFO][6385] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.855 [WARNING][6385] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.855 [INFO][6385] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.856 [INFO][6385] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:12.860210 containerd[1735]: 2026-03-13 12:25:12.858 [INFO][6378] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.860696 containerd[1735]: time="2026-03-13T12:25:12.860247312Z" level=info msg="TearDown network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" successfully" Mar 13 12:25:12.860696 containerd[1735]: time="2026-03-13T12:25:12.860277152Z" level=info msg="StopPodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" returns successfully" Mar 13 12:25:12.861143 containerd[1735]: time="2026-03-13T12:25:12.861114433Z" level=info msg="RemovePodSandbox for \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\"" Mar 13 12:25:12.861198 containerd[1735]: time="2026-03-13T12:25:12.861149753Z" level=info msg="Forcibly stopping sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\"" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.893 [WARNING][6399] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"42dbc5a5-80ff-4c29-b964-19cfe9b7fccd", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"81b1f483a2140f3ccc7cdd146f4424fc819e0d2bc44bb71ee3903b340689c67f", Pod:"goldmane-cccfbd5cf-b86jq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali02446c7813b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.894 [INFO][6399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.894 [INFO][6399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" iface="eth0" netns="" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.894 [INFO][6399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.894 [INFO][6399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.912 [INFO][6406] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.912 [INFO][6406] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.912 [INFO][6406] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.921 [WARNING][6406] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.921 [INFO][6406] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" HandleID="k8s-pod-network.0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Workload="ci--4081.3.101--461ebd96c0-k8s-goldmane--cccfbd5cf--b86jq-eth0" Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.925 [INFO][6406] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:12.929169 containerd[1735]: 2026-03-13 12:25:12.927 [INFO][6399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371" Mar 13 12:25:12.929716 containerd[1735]: time="2026-03-13T12:25:12.929232996Z" level=info msg="TearDown network for sandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" successfully" Mar 13 12:25:12.952479 containerd[1735]: time="2026-03-13T12:25:12.952405318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:25:12.952593 containerd[1735]: time="2026-03-13T12:25:12.952514398Z" level=info msg="RemovePodSandbox \"0a10f1c4ff129e8d87942a04de07d88e92fcef00c37d5f89db009f264550c371\" returns successfully" Mar 13 12:25:12.953079 containerd[1735]: time="2026-03-13T12:25:12.953055119Z" level=info msg="StopPodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\"" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:12.984 [WARNING][6420] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6e79ed8-8793-4170-b6be-57dc4f48b338", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928", Pod:"coredns-66bc5c9577-cbn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9169657261", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:12.984 [INFO][6420] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:12.984 [INFO][6420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" iface="eth0" netns="" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:12.984 [INFO][6420] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:12.984 [INFO][6420] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.002 [INFO][6427] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.002 [INFO][6427] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.002 [INFO][6427] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.010 [WARNING][6427] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.010 [INFO][6427] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.011 [INFO][6427] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.015165 containerd[1735]: 2026-03-13 12:25:13.013 [INFO][6420] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.015165 containerd[1735]: time="2026-03-13T12:25:13.014779751Z" level=info msg="TearDown network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" successfully" Mar 13 12:25:13.015165 containerd[1735]: time="2026-03-13T12:25:13.014804631Z" level=info msg="StopPodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" returns successfully" Mar 13 12:25:13.016210 containerd[1735]: time="2026-03-13T12:25:13.015885233Z" level=info msg="RemovePodSandbox for \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\"" Mar 13 12:25:13.016210 containerd[1735]: time="2026-03-13T12:25:13.015917793Z" level=info msg="Forcibly stopping sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\"" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.047 [WARNING][6441] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e6e79ed8-8793-4170-b6be-57dc4f48b338", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"8d65c1aed4214d7600dbe1db9ce49625e7a49c5c73197b6d72801ec2a522d928", Pod:"coredns-66bc5c9577-cbn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif9169657261", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.047 [INFO][6441] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.047 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" iface="eth0" netns="" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.047 [INFO][6441] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.047 [INFO][6441] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.066 [INFO][6448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.066 [INFO][6448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.066 [INFO][6448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.074 [WARNING][6448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.074 [INFO][6448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" HandleID="k8s-pod-network.4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--cbn7n-eth0" Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.076 [INFO][6448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.079791 containerd[1735]: 2026-03-13 12:25:13.078 [INFO][6441] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83" Mar 13 12:25:13.080196 containerd[1735]: time="2026-03-13T12:25:13.079839348Z" level=info msg="TearDown network for sandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" successfully" Mar 13 12:25:13.096907 containerd[1735]: time="2026-03-13T12:25:13.096869259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:25:13.096987 containerd[1735]: time="2026-03-13T12:25:13.096943219Z" level=info msg="RemovePodSandbox \"4cfcec1926d59f0f27a3f60aaea499e2d4a91a92ae7ada62fdeddcf16063cb83\" returns successfully" Mar 13 12:25:13.097473 containerd[1735]: time="2026-03-13T12:25:13.097447820Z" level=info msg="StopPodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\"" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.131 [WARNING][6462] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e7c6c09-c767-4bee-9251-d729af24c7dc", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f", Pod:"csi-node-driver-q6rzg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48cb8262ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.132 [INFO][6462] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.132 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" iface="eth0" netns="" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.132 [INFO][6462] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.132 [INFO][6462] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.149 [INFO][6469] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.149 [INFO][6469] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.149 [INFO][6469] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.157 [WARNING][6469] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.157 [INFO][6469] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.158 [INFO][6469] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.161677 containerd[1735]: 2026-03-13 12:25:13.160 [INFO][6462] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.161677 containerd[1735]: time="2026-03-13T12:25:13.161623416Z" level=info msg="TearDown network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" successfully" Mar 13 12:25:13.161677 containerd[1735]: time="2026-03-13T12:25:13.161648416Z" level=info msg="StopPodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" returns successfully" Mar 13 12:25:13.163729 containerd[1735]: time="2026-03-13T12:25:13.163315939Z" level=info msg="RemovePodSandbox for \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\"" Mar 13 12:25:13.163729 containerd[1735]: time="2026-03-13T12:25:13.163464459Z" level=info msg="Forcibly stopping sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\"" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.201 [WARNING][6483] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e7c6c09-c767-4bee-9251-d729af24c7dc", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"1484a5036062e86ed6add63c1516bffdc714b964f5b8cbca86107fb54d95568f", Pod:"csi-node-driver-q6rzg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48cb8262ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.201 [INFO][6483] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.201 [INFO][6483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" iface="eth0" netns="" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.201 [INFO][6483] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.201 [INFO][6483] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.221 [INFO][6491] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.221 [INFO][6491] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.221 [INFO][6491] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.229 [WARNING][6491] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.229 [INFO][6491] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" HandleID="k8s-pod-network.9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Workload="ci--4081.3.101--461ebd96c0-k8s-csi--node--driver--q6rzg-eth0" Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.230 [INFO][6491] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.234307 containerd[1735]: 2026-03-13 12:25:13.232 [INFO][6483] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a" Mar 13 12:25:13.234307 containerd[1735]: time="2026-03-13T12:25:13.234213827Z" level=info msg="TearDown network for sandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" successfully" Mar 13 12:25:13.254451 containerd[1735]: time="2026-03-13T12:25:13.254400864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:25:13.254533 containerd[1735]: time="2026-03-13T12:25:13.254485584Z" level=info msg="RemovePodSandbox \"9990637eed46c4acf2726e6130eee0b6849968cb1fd5024b8a5005164d44c06a\" returns successfully" Mar 13 12:25:13.255239 containerd[1735]: time="2026-03-13T12:25:13.255022305Z" level=info msg="StopPodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\"" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.287 [WARNING][6505] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038", Pod:"calico-apiserver-6c7f959487-nm874", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f9f5cbf78b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.287 [INFO][6505] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.287 [INFO][6505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" iface="eth0" netns="" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.287 [INFO][6505] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.287 [INFO][6505] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.306 [INFO][6512] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.307 [INFO][6512] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.307 [INFO][6512] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.315 [WARNING][6512] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.315 [INFO][6512] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.317 [INFO][6512] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.320630 containerd[1735]: 2026-03-13 12:25:13.318 [INFO][6505] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.321268 containerd[1735]: time="2026-03-13T12:25:13.320668544Z" level=info msg="TearDown network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" successfully" Mar 13 12:25:13.321268 containerd[1735]: time="2026-03-13T12:25:13.320692944Z" level=info msg="StopPodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" returns successfully" Mar 13 12:25:13.321268 containerd[1735]: time="2026-03-13T12:25:13.321122144Z" level=info msg="RemovePodSandbox for \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\"" Mar 13 12:25:13.321268 containerd[1735]: time="2026-03-13T12:25:13.321149624Z" level=info msg="Forcibly stopping sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\"" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.354 [WARNING][6526] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0", GenerateName:"calico-apiserver-6c7f959487-", Namespace:"calico-system", SelfLink:"", UID:"f97ecaa3-f993-4c5c-ba9c-9db8e1bdfbf1", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7f959487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"aa5bcb684054c641dccfb5d6d53b947a994dfbc12507c1f7f15acc6138248038", Pod:"calico-apiserver-6c7f959487-nm874", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f9f5cbf78b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.354 [INFO][6526] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.355 [INFO][6526] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" iface="eth0" netns="" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.355 [INFO][6526] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.355 [INFO][6526] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.373 [INFO][6533] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.373 [INFO][6533] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.373 [INFO][6533] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.383 [WARNING][6533] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.383 [INFO][6533] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" HandleID="k8s-pod-network.f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--apiserver--6c7f959487--nm874-eth0" Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.385 [INFO][6533] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.388719 containerd[1735]: 2026-03-13 12:25:13.386 [INFO][6526] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b" Mar 13 12:25:13.389210 containerd[1735]: time="2026-03-13T12:25:13.388760467Z" level=info msg="TearDown network for sandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" successfully" Mar 13 12:25:13.396594 containerd[1735]: time="2026-03-13T12:25:13.396544361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:25:13.396690 containerd[1735]: time="2026-03-13T12:25:13.396644521Z" level=info msg="RemovePodSandbox \"f550c618c0d7a0477ff91c3fff23f1981147e2b70c8be4ce5e96103b43c55a3b\" returns successfully" Mar 13 12:25:13.397205 containerd[1735]: time="2026-03-13T12:25:13.397178082Z" level=info msg="StopPodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\"" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.444 [WARNING][6547] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a8131164-7f35-4890-a8dd-a27d2707cba1", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7", Pod:"coredns-66bc5c9577-xclsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califed3dbd70a0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.444 [INFO][6547] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.444 [INFO][6547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" iface="eth0" netns="" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.444 [INFO][6547] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.444 [INFO][6547] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.462 [INFO][6554] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.462 [INFO][6554] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.462 [INFO][6554] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.471 [WARNING][6554] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.471 [INFO][6554] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.473 [INFO][6554] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.476943 containerd[1735]: 2026-03-13 12:25:13.474 [INFO][6547] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.476943 containerd[1735]: time="2026-03-13T12:25:13.476762626Z" level=info msg="TearDown network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" successfully" Mar 13 12:25:13.476943 containerd[1735]: time="2026-03-13T12:25:13.476787066Z" level=info msg="StopPodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" returns successfully" Mar 13 12:25:13.478798 containerd[1735]: time="2026-03-13T12:25:13.477801867Z" level=info msg="RemovePodSandbox for \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\"" Mar 13 12:25:13.478798 containerd[1735]: time="2026-03-13T12:25:13.477848508Z" level=info msg="Forcibly stopping sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\"" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.511 [WARNING][6568] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a8131164-7f35-4890-a8dd-a27d2707cba1", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"3bc1e0adf4bd20dcee5a42d9f8caffd49c90f66aa392431660d4da68d4eb20a7", Pod:"coredns-66bc5c9577-xclsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califed3dbd70a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.512 [INFO][6568] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.512 [INFO][6568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" iface="eth0" netns="" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.512 [INFO][6568] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.512 [INFO][6568] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.531 [INFO][6575] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.531 [INFO][6575] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.531 [INFO][6575] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.539 [WARNING][6575] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.540 [INFO][6575] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" HandleID="k8s-pod-network.37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Workload="ci--4081.3.101--461ebd96c0-k8s-coredns--66bc5c9577--xclsd-eth0" Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.541 [INFO][6575] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.544488 containerd[1735]: 2026-03-13 12:25:13.542 [INFO][6568] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178" Mar 13 12:25:13.544901 containerd[1735]: time="2026-03-13T12:25:13.544523868Z" level=info msg="TearDown network for sandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" successfully" Mar 13 12:25:13.555205 containerd[1735]: time="2026-03-13T12:25:13.555158687Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:25:13.555289 containerd[1735]: time="2026-03-13T12:25:13.555230567Z" level=info msg="RemovePodSandbox \"37a32e3707f0b3759ba765278215cb2a84a82a860f25b915d9bb7e3b8e888178\" returns successfully" Mar 13 12:25:13.555709 containerd[1735]: time="2026-03-13T12:25:13.555684408Z" level=info msg="StopPodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\"" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.589 [WARNING][6589] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0", GenerateName:"calico-kube-controllers-64c54c94bc-", Namespace:"calico-system", SelfLink:"", UID:"f9d96381-f483-497e-aa8b-0362e6cdf815", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c54c94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3", Pod:"calico-kube-controllers-64c54c94bc-57wqz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali39e38bce82f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.589 [INFO][6589] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.589 [INFO][6589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" iface="eth0" netns="" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.589 [INFO][6589] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.589 [INFO][6589] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.608 [INFO][6597] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.608 [INFO][6597] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.608 [INFO][6597] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.616 [WARNING][6597] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.616 [INFO][6597] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.617 [INFO][6597] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.621333 containerd[1735]: 2026-03-13 12:25:13.619 [INFO][6589] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.622292 containerd[1735]: time="2026-03-13T12:25:13.621422127Z" level=info msg="TearDown network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" successfully" Mar 13 12:25:13.622292 containerd[1735]: time="2026-03-13T12:25:13.622040168Z" level=info msg="StopPodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" returns successfully" Mar 13 12:25:13.622913 containerd[1735]: time="2026-03-13T12:25:13.622510209Z" level=info msg="RemovePodSandbox for \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\"" Mar 13 12:25:13.622913 containerd[1735]: time="2026-03-13T12:25:13.622535409Z" level=info msg="Forcibly stopping sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\"" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.655 [WARNING][6611] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0", GenerateName:"calico-kube-controllers-64c54c94bc-", Namespace:"calico-system", SelfLink:"", UID:"f9d96381-f483-497e-aa8b-0362e6cdf815", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64c54c94bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-461ebd96c0", ContainerID:"d517bcc7540d8ce1714670603dcefd537547574ee244d5746f0a01625514bec3", Pod:"calico-kube-controllers-64c54c94bc-57wqz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali39e38bce82f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.852 [INFO][6611] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.852 [INFO][6611] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" iface="eth0" netns="" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.852 [INFO][6611] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.852 [INFO][6611] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.875 [INFO][6619] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.875 [INFO][6619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.876 [INFO][6619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.884 [WARNING][6619] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.884 [INFO][6619] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" HandleID="k8s-pod-network.5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Workload="ci--4081.3.101--461ebd96c0-k8s-calico--kube--controllers--64c54c94bc--57wqz-eth0" Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.885 [INFO][6619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:25:13.889201 containerd[1735]: 2026-03-13 12:25:13.887 [INFO][6611] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12" Mar 13 12:25:13.889885 containerd[1735]: time="2026-03-13T12:25:13.889491211Z" level=info msg="TearDown network for sandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" successfully" Mar 13 12:25:14.407669 systemd[1]: Started sshd@10-10.200.20.18:22-10.200.16.10:59408.service - OpenSSH per-connection server daemon (10.200.16.10:59408). Mar 13 12:25:14.856563 sshd[6626]: Accepted publickey for core from 10.200.16.10 port 59408 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:25:14.857948 sshd[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:25:14.862039 systemd-logind[1698]: New session 13 of user core. Mar 13 12:25:14.870581 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 13 12:25:15.241137 sshd[6626]: pam_unix(sshd:session): session closed for user core Mar 13 12:25:15.244804 systemd[1]: sshd@10-10.200.20.18:22-10.200.16.10:59408.service: Deactivated successfully. Mar 13 12:25:15.247126 systemd[1]: session-13.scope: Deactivated successfully. Mar 13 12:25:15.248345 systemd-logind[1698]: Session 13 logged out. Waiting for processes to exit. Mar 13 12:25:15.249883 systemd-logind[1698]: Removed session 13. Mar 13 12:25:15.547298 containerd[1735]: time="2026-03-13T12:25:15.547175447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 13 12:25:15.547298 containerd[1735]: time="2026-03-13T12:25:15.547262927Z" level=info msg="RemovePodSandbox \"5dcd67651505410ecb64a710539424bfd6dd8a0fd1cc17c3aa1bffaaeae86c12\" returns successfully" Mar 13 12:25:20.344075 systemd[1]: Started sshd@11-10.200.20.18:22-10.200.16.10:46008.service - OpenSSH per-connection server daemon (10.200.16.10:46008). Mar 13 12:25:20.829896 sshd[6660]: Accepted publickey for core from 10.200.16.10 port 46008 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:25:20.831470 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:25:20.835165 systemd-logind[1698]: New session 14 of user core. Mar 13 12:25:20.840556 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 12:25:21.280470 sshd[6660]: pam_unix(sshd:session): session closed for user core Mar 13 12:25:21.284089 systemd[1]: sshd@11-10.200.20.18:22-10.200.16.10:46008.service: Deactivated successfully. Mar 13 12:25:21.285997 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 12:25:21.287237 systemd-logind[1698]: Session 14 logged out. Waiting for processes to exit. 
Mar 13 12:25:21.287990 systemd-logind[1698]: Removed session 14. Mar 13 12:25:21.343689 systemd[1]: Started sshd@12-10.200.20.18:22-10.200.16.10:46020.service - OpenSSH per-connection server daemon (10.200.16.10:46020). Mar 13 12:25:21.794879 sshd[6692]: Accepted publickey for core from 10.200.16.10 port 46020 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:25:21.796294 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:25:21.800968 systemd-logind[1698]: New session 15 of user core. Mar 13 12:25:21.807589 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 13 12:25:22.231669 sshd[6692]: pam_unix(sshd:session): session closed for user core Mar 13 12:25:22.235879 systemd[1]: sshd@12-10.200.20.18:22-10.200.16.10:46020.service: Deactivated successfully. Mar 13 12:25:22.237854 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 12:25:22.238626 systemd-logind[1698]: Session 15 logged out. Waiting for processes to exit. Mar 13 12:25:22.239937 systemd-logind[1698]: Removed session 15. Mar 13 12:25:22.310672 systemd[1]: Started sshd@13-10.200.20.18:22-10.200.16.10:46032.service - OpenSSH per-connection server daemon (10.200.16.10:46032). Mar 13 12:25:22.759097 sshd[6703]: Accepted publickey for core from 10.200.16.10 port 46032 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:25:22.762190 sshd[6703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:25:22.766711 systemd-logind[1698]: New session 16 of user core. Mar 13 12:25:22.770609 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 12:25:23.196616 sshd[6703]: pam_unix(sshd:session): session closed for user core Mar 13 12:25:23.199965 systemd[1]: sshd@13-10.200.20.18:22-10.200.16.10:46032.service: Deactivated successfully. Mar 13 12:25:23.202070 systemd[1]: session-16.scope: Deactivated successfully. 
Mar 13 12:25:23.203752 systemd-logind[1698]: Session 16 logged out. Waiting for processes to exit.
Mar 13 12:25:23.204904 systemd-logind[1698]: Removed session 16.
Mar 13 12:25:28.259231 systemd[1]: Started sshd@14-10.200.20.18:22-10.200.16.10:46038.service - OpenSSH per-connection server daemon (10.200.16.10:46038).
Mar 13 12:25:28.713187 sshd[6744]: Accepted publickey for core from 10.200.16.10 port 46038 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:28.714581 sshd[6744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:28.722781 systemd-logind[1698]: New session 17 of user core.
Mar 13 12:25:28.728593 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 12:25:29.100804 sshd[6744]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:29.104106 systemd[1]: sshd@14-10.200.20.18:22-10.200.16.10:46038.service: Deactivated successfully.
Mar 13 12:25:29.105872 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 12:25:29.108129 systemd-logind[1698]: Session 17 logged out. Waiting for processes to exit.
Mar 13 12:25:29.109257 systemd-logind[1698]: Removed session 17.
Mar 13 12:25:29.194127 systemd[1]: Started sshd@15-10.200.20.18:22-10.200.16.10:46054.service - OpenSSH per-connection server daemon (10.200.16.10:46054).
Mar 13 12:25:29.681971 sshd[6757]: Accepted publickey for core from 10.200.16.10 port 46054 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:29.683339 sshd[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:29.687969 systemd-logind[1698]: New session 18 of user core.
Mar 13 12:25:29.698584 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 12:25:31.147236 sshd[6757]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:31.150913 systemd[1]: sshd@15-10.200.20.18:22-10.200.16.10:46054.service: Deactivated successfully.
Mar 13 12:25:31.153086 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 12:25:31.155152 systemd-logind[1698]: Session 18 logged out. Waiting for processes to exit.
Mar 13 12:25:31.156300 systemd-logind[1698]: Removed session 18.
Mar 13 12:25:31.234997 systemd[1]: Started sshd@16-10.200.20.18:22-10.200.16.10:49944.service - OpenSSH per-connection server daemon (10.200.16.10:49944).
Mar 13 12:25:31.728028 sshd[6768]: Accepted publickey for core from 10.200.16.10 port 49944 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:31.728977 sshd[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:31.733645 systemd-logind[1698]: New session 19 of user core.
Mar 13 12:25:31.740578 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 12:25:33.235248 sshd[6768]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:33.238712 systemd[1]: sshd@16-10.200.20.18:22-10.200.16.10:49944.service: Deactivated successfully.
Mar 13 12:25:33.242462 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 12:25:33.244984 systemd-logind[1698]: Session 19 logged out. Waiting for processes to exit.
Mar 13 12:25:33.246131 systemd-logind[1698]: Removed session 19.
Mar 13 12:25:33.332662 systemd[1]: Started sshd@17-10.200.20.18:22-10.200.16.10:49956.service - OpenSSH per-connection server daemon (10.200.16.10:49956).
Mar 13 12:25:33.819291 sshd[6800]: Accepted publickey for core from 10.200.16.10 port 49956 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:33.820698 sshd[6800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:33.825495 systemd-logind[1698]: New session 20 of user core.
Mar 13 12:25:33.830602 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 12:25:34.345901 sshd[6800]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:34.349035 systemd[1]: sshd@17-10.200.20.18:22-10.200.16.10:49956.service: Deactivated successfully.
Mar 13 12:25:34.350750 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 12:25:34.351402 systemd-logind[1698]: Session 20 logged out. Waiting for processes to exit.
Mar 13 12:25:34.352232 systemd-logind[1698]: Removed session 20.
Mar 13 12:25:34.444950 systemd[1]: Started sshd@18-10.200.20.18:22-10.200.16.10:49962.service - OpenSSH per-connection server daemon (10.200.16.10:49962).
Mar 13 12:25:34.928537 sshd[6813]: Accepted publickey for core from 10.200.16.10 port 49962 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:34.931025 sshd[6813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:34.937851 systemd-logind[1698]: New session 21 of user core.
Mar 13 12:25:34.941586 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 12:25:35.336662 sshd[6813]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:35.340011 systemd[1]: sshd@18-10.200.20.18:22-10.200.16.10:49962.service: Deactivated successfully.
Mar 13 12:25:35.342173 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 12:25:35.344115 systemd-logind[1698]: Session 21 logged out. Waiting for processes to exit.
Mar 13 12:25:35.345302 systemd-logind[1698]: Removed session 21.
Mar 13 12:25:40.432736 systemd[1]: Started sshd@19-10.200.20.18:22-10.200.16.10:49564.service - OpenSSH per-connection server daemon (10.200.16.10:49564).
Mar 13 12:25:40.923448 sshd[6874]: Accepted publickey for core from 10.200.16.10 port 49564 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:41.617277 sshd[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:41.621165 systemd-logind[1698]: New session 22 of user core.
Mar 13 12:25:41.625570 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 12:25:41.965684 sshd[6874]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:41.969554 systemd[1]: sshd@19-10.200.20.18:22-10.200.16.10:49564.service: Deactivated successfully.
Mar 13 12:25:41.969614 systemd-logind[1698]: Session 22 logged out. Waiting for processes to exit.
Mar 13 12:25:41.971637 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 12:25:41.972630 systemd-logind[1698]: Removed session 22.
Mar 13 12:25:47.055081 systemd[1]: Started sshd@20-10.200.20.18:22-10.200.16.10:49572.service - OpenSSH per-connection server daemon (10.200.16.10:49572).
Mar 13 12:25:47.543459 sshd[6899]: Accepted publickey for core from 10.200.16.10 port 49572 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:47.544847 sshd[6899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:47.549285 systemd-logind[1698]: New session 23 of user core.
Mar 13 12:25:47.558576 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 12:25:47.952271 sshd[6899]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:47.957805 systemd[1]: sshd@20-10.200.20.18:22-10.200.16.10:49572.service: Deactivated successfully.
Mar 13 12:25:47.960487 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 12:25:47.961442 systemd-logind[1698]: Session 23 logged out. Waiting for processes to exit.
Mar 13 12:25:47.962309 systemd-logind[1698]: Removed session 23.
Mar 13 12:25:49.981820 systemd[1]: run-containerd-runc-k8s.io-42f96b429f8e37784edf7d5203ce364ee31721ef9519cbe323542f50a129ef2b-runc.6cRNei.mount: Deactivated successfully.
Mar 13 12:25:53.041300 systemd[1]: Started sshd@21-10.200.20.18:22-10.200.16.10:46654.service - OpenSSH per-connection server daemon (10.200.16.10:46654).
Mar 13 12:25:53.535814 sshd[6934]: Accepted publickey for core from 10.200.16.10 port 46654 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:53.537716 sshd[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:53.546990 systemd-logind[1698]: New session 24 of user core.
Mar 13 12:25:53.550602 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 12:25:53.943924 sshd[6934]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:53.947138 systemd-logind[1698]: Session 24 logged out. Waiting for processes to exit.
Mar 13 12:25:53.947534 systemd[1]: sshd@21-10.200.20.18:22-10.200.16.10:46654.service: Deactivated successfully.
Mar 13 12:25:53.949196 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 12:25:53.951531 systemd-logind[1698]: Removed session 24.
Mar 13 12:25:59.036975 systemd[1]: Started sshd@22-10.200.20.18:22-10.200.16.10:46658.service - OpenSSH per-connection server daemon (10.200.16.10:46658).
Mar 13 12:25:59.529499 sshd[6967]: Accepted publickey for core from 10.200.16.10 port 46658 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:59.530553 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:59.535634 systemd-logind[1698]: New session 25 of user core.
Mar 13 12:25:59.543660 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 12:25:59.943225 sshd[6967]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:59.946528 systemd[1]: sshd@22-10.200.20.18:22-10.200.16.10:46658.service: Deactivated successfully.
Mar 13 12:25:59.948358 systemd[1]: session-25.scope: Deactivated successfully.
Mar 13 12:25:59.950231 systemd-logind[1698]: Session 25 logged out. Waiting for processes to exit.
Mar 13 12:25:59.951180 systemd-logind[1698]: Removed session 25.
Mar 13 12:26:05.027714 systemd[1]: Started sshd@23-10.200.20.18:22-10.200.16.10:32900.service - OpenSSH per-connection server daemon (10.200.16.10:32900).
Mar 13 12:26:05.481470 sshd[7037]: Accepted publickey for core from 10.200.16.10 port 32900 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:26:05.482999 sshd[7037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:26:05.487116 systemd-logind[1698]: New session 26 of user core.
Mar 13 12:26:05.492817 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 12:26:05.869171 sshd[7037]: pam_unix(sshd:session): session closed for user core
Mar 13 12:26:05.872563 systemd[1]: sshd@23-10.200.20.18:22-10.200.16.10:32900.service: Deactivated successfully.
Mar 13 12:26:05.874729 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 12:26:05.875568 systemd-logind[1698]: Session 26 logged out. Waiting for processes to exit.
Mar 13 12:26:05.876552 systemd-logind[1698]: Removed session 26.