Mar 2 13:29:58.207271 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 2 13:29:58.207298 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Mar 2 11:11:01 -00 2026
Mar 2 13:29:58.207308 kernel: KASLR enabled
Mar 2 13:29:58.207315 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 2 13:29:58.207323 kernel: printk: bootconsole [pl11] enabled
Mar 2 13:29:58.207329 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:29:58.207337 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 2 13:29:58.207344 kernel: random: crng init done
Mar 2 13:29:58.207351 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:29:58.207358 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 2 13:29:58.207364 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207370 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207379 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 2 13:29:58.207386 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207394 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207401 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207409 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207417 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207425 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207433 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 2 13:29:58.207441 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207449 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 2 13:29:58.207456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 2 13:29:58.207463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 2 13:29:58.207470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 2 13:29:58.209536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 2 13:29:58.209557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 2 13:29:58.209564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 2 13:29:58.209577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 2 13:29:58.209584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 2 13:29:58.209592 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 2 13:29:58.209599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 2 13:29:58.209606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 2 13:29:58.209612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 2 13:29:58.209620 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 2 13:29:58.209627 kernel: Zone ranges:
Mar 2 13:29:58.209635 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 2 13:29:58.209643 kernel: DMA32 empty
Mar 2 13:29:58.209651 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:29:58.209659 kernel: Movable zone start for each node
Mar 2 13:29:58.209672 kernel: Early memory node ranges
Mar 2 13:29:58.209680 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 2 13:29:58.209689 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 2 13:29:58.209697 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 2 13:29:58.209705 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 2 13:29:58.209715 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 2 13:29:58.209723 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 2 13:29:58.209730 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:29:58.209739 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 2 13:29:58.209747 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 2 13:29:58.209754 kernel: psci: probing for conduit method from ACPI.
Mar 2 13:29:58.209762 kernel: psci: PSCIv1.1 detected in firmware.
Mar 2 13:29:58.209770 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 2 13:29:58.209777 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 2 13:29:58.209785 kernel: psci: SMC Calling Convention v1.4
Mar 2 13:29:58.209793 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 2 13:29:58.209800 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 2 13:29:58.209809 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 2 13:29:58.209817 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 2 13:29:58.209825 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 2 13:29:58.209833 kernel: Detected PIPT I-cache on CPU0
Mar 2 13:29:58.209841 kernel: CPU features: detected: GIC system register CPU interface
Mar 2 13:29:58.209849 kernel: CPU features: detected: Hardware dirty bit management
Mar 2 13:29:58.209857 kernel: CPU features: detected: Spectre-BHB
Mar 2 13:29:58.209864 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 2 13:29:58.209872 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 2 13:29:58.209879 kernel: CPU features: detected: ARM erratum 1418040
Mar 2 13:29:58.209888 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 2 13:29:58.209897 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 2 13:29:58.209905 kernel: alternatives: applying boot alternatives
Mar 2 13:29:58.209914 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:29:58.209923 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:29:58.209931 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:29:58.209939 kernel: Fallback order for Node 0: 0
Mar 2 13:29:58.209946 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 2 13:29:58.209954 kernel: Policy zone: Normal
Mar 2 13:29:58.209962 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:29:58.209969 kernel: software IO TLB: area num 2.
Mar 2 13:29:58.209977 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 2 13:29:58.209987 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 2 13:29:58.209995 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 2 13:29:58.210002 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:29:58.210011 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:29:58.210020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 2 13:29:58.210027 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:29:58.210035 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:29:58.210043 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:29:58.210050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 2 13:29:58.210058 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 2 13:29:58.210065 kernel: GICv3: 960 SPIs implemented
Mar 2 13:29:58.210075 kernel: GICv3: 0 Extended SPIs implemented
Mar 2 13:29:58.210083 kernel: Root IRQ handler: gic_handle_irq
Mar 2 13:29:58.210091 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 2 13:29:58.210099 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 2 13:29:58.210108 kernel: ITS: No ITS available, not enabling LPIs
Mar 2 13:29:58.210116 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:29:58.210124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:29:58.210131 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 2 13:29:58.210140 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 2 13:29:58.210148 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 2 13:29:58.210156 kernel: Console: colour dummy device 80x25
Mar 2 13:29:58.210165 kernel: printk: console [tty1] enabled
Mar 2 13:29:58.210173 kernel: ACPI: Core revision 20230628
Mar 2 13:29:58.210182 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 2 13:29:58.210191 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:29:58.210199 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:29:58.210208 kernel: landlock: Up and running.
Mar 2 13:29:58.210217 kernel: SELinux: Initializing.
Mar 2 13:29:58.210225 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210233 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210243 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:29:58.210251 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:29:58.210260 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 2 13:29:58.210268 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 2 13:29:58.210275 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 2 13:29:58.210283 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:29:58.210292 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:29:58.210301 kernel: Remapping and enabling EFI services.
Mar 2 13:29:58.210315 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:29:58.210324 kernel: Detected PIPT I-cache on CPU1
Mar 2 13:29:58.210332 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 2 13:29:58.210340 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:29:58.210350 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 2 13:29:58.210359 kernel: smp: Brought up 1 node, 2 CPUs
Mar 2 13:29:58.210368 kernel: SMP: Total of 2 processors activated.
Mar 2 13:29:58.210376 kernel: CPU features: detected: 32-bit EL0 Support
Mar 2 13:29:58.210386 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 2 13:29:58.210396 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 2 13:29:58.210404 kernel: CPU features: detected: CRC32 instructions
Mar 2 13:29:58.210412 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 2 13:29:58.210420 kernel: CPU features: detected: LSE atomic instructions
Mar 2 13:29:58.210429 kernel: CPU features: detected: Privileged Access Never
Mar 2 13:29:58.210437 kernel: CPU: All CPU(s) started at EL1
Mar 2 13:29:58.210449 kernel: alternatives: applying system-wide alternatives
Mar 2 13:29:58.210457 kernel: devtmpfs: initialized
Mar 2 13:29:58.210466 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:29:58.210504 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 2 13:29:58.210514 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:29:58.210522 kernel: SMBIOS 3.1.0 present.
Mar 2 13:29:58.210531 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 2 13:29:58.210540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:29:58.210549 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 2 13:29:58.210558 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 2 13:29:58.210567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 2 13:29:58.210575 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:29:58.210586 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 2 13:29:58.210595 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:29:58.210604 kernel: cpuidle: using governor menu
Mar 2 13:29:58.210612 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 2 13:29:58.210621 kernel: ASID allocator initialised with 32768 entries
Mar 2 13:29:58.210630 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:29:58.210638 kernel: Serial: AMBA PL011 UART driver
Mar 2 13:29:58.210647 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 2 13:29:58.210656 kernel: Modules: 0 pages in range for non-PLT usage
Mar 2 13:29:58.210667 kernel: Modules: 509008 pages in range for PLT usage
Mar 2 13:29:58.210676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210684 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:29:58.210693 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 2 13:29:58.210709 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210717 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:29:58.210725 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210734 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 2 13:29:58.210744 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:29:58.210753 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:29:58.210761 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:29:58.210769 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:29:58.210778 kernel: ACPI: Interpreter enabled
Mar 2 13:29:58.210787 kernel: ACPI: Using GIC for interrupt routing
Mar 2 13:29:58.210796 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 2 13:29:58.210804 kernel: printk: console [ttyAMA0] enabled
Mar 2 13:29:58.210813 kernel: printk: bootconsole [pl11] disabled
Mar 2 13:29:58.210824 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 2 13:29:58.210833 kernel: iommu: Default domain type: Translated
Mar 2 13:29:58.210841 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 2 13:29:58.210850 kernel: efivars: Registered efivars operations
Mar 2 13:29:58.210859 kernel: vgaarb: loaded
Mar 2 13:29:58.210869 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 2 13:29:58.210877 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:29:58.210885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:29:58.210894 kernel: pnp: PnP ACPI init
Mar 2 13:29:58.210904 kernel: pnp: PnP ACPI: found 0 devices
Mar 2 13:29:58.210912 kernel: NET: Registered PF_INET protocol family
Mar 2 13:29:58.210921 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:29:58.210930 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:29:58.210939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:29:58.210947 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:29:58.210956 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:29:58.210964 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:29:58.210973 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210983 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210992 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:29:58.211001 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:29:58.211009 kernel: kvm [1]: HYP mode not available
Mar 2 13:29:58.211018 kernel: Initialise system trusted keyrings
Mar 2 13:29:58.211028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:29:58.211036 kernel: Key type asymmetric registered
Mar 2 13:29:58.211045 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:29:58.211054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 13:29:58.211065 kernel: io scheduler mq-deadline registered
Mar 2 13:29:58.211074 kernel: io scheduler kyber registered
Mar 2 13:29:58.211082 kernel: io scheduler bfq registered
Mar 2 13:29:58.211091 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:29:58.211118 kernel: thunder_xcv, ver 1.0
Mar 2 13:29:58.211127 kernel: thunder_bgx, ver 1.0
Mar 2 13:29:58.211136 kernel: nicpf, ver 1.0
Mar 2 13:29:58.211145 kernel: nicvf, ver 1.0
Mar 2 13:29:58.211330 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 2 13:29:58.211427 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-02T13:29:57 UTC (1772458197)
Mar 2 13:29:58.211439 kernel: efifb: probing for efifb
Mar 2 13:29:58.211448 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 2 13:29:58.211457 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 2 13:29:58.211465 kernel: efifb: scrolling: redraw
Mar 2 13:29:58.211473 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 13:29:58.213531 kernel: Console: switching to colour frame buffer device 128x48
Mar 2 13:29:58.213542 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:29:58.213556 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 2 13:29:58.213565 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 2 13:29:58.213573 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 2 13:29:58.213582 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 2 13:29:58.213590 kernel: watchdog: Hard watchdog permanently disabled
Mar 2 13:29:58.213598 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:29:58.213607 kernel: Segment Routing with IPv6
Mar 2 13:29:58.213615 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:29:58.213623 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:29:58.213633 kernel: Key type dns_resolver registered
Mar 2 13:29:58.213642 kernel: registered taskstats version 1
Mar 2 13:29:58.213650 kernel: Loading compiled-in X.509 certificates
Mar 2 13:29:58.213658 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 888055ac257926b028c9aac8084c1e2b1bcee773'
Mar 2 13:29:58.213666 kernel: Key type .fscrypt registered
Mar 2 13:29:58.213674 kernel: Key type fscrypt-provisioning registered
Mar 2 13:29:58.213682 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:29:58.213691 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:29:58.213699 kernel: ima: No architecture policies found
Mar 2 13:29:58.213711 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 2 13:29:58.213719 kernel: clk: Disabling unused clocks
Mar 2 13:29:58.213727 kernel: Freeing unused kernel memory: 39424K
Mar 2 13:29:58.213735 kernel: Run /init as init process
Mar 2 13:29:58.213743 kernel: with arguments:
Mar 2 13:29:58.213751 kernel: /init
Mar 2 13:29:58.213759 kernel: with environment:
Mar 2 13:29:58.213767 kernel: HOME=/
Mar 2 13:29:58.213775 kernel: TERM=linux
Mar 2 13:29:58.213785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:29:58.213798 systemd[1]: Detected virtualization microsoft.
Mar 2 13:29:58.213807 systemd[1]: Detected architecture arm64.
Mar 2 13:29:58.213816 systemd[1]: Running in initrd.
Mar 2 13:29:58.213824 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:29:58.213833 systemd[1]: Hostname set to .
Mar 2 13:29:58.213842 systemd[1]: Initializing machine ID from random generator.
Mar 2 13:29:58.213854 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:29:58.213863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:29:58.213872 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:29:58.213882 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:29:58.213891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:29:58.213901 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:29:58.213911 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:29:58.213921 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:29:58.213932 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:29:58.213941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:29:58.213950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:29:58.213958 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:29:58.213968 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:29:58.213977 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:29:58.213985 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:29:58.213994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:29:58.214006 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:29:58.214015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:29:58.214024 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:29:58.214033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:29:58.214042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:29:58.214051 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:29:58.214059 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:29:58.214068 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:29:58.214079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:29:58.214088 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:29:58.214096 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:29:58.214105 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:29:58.214143 systemd-journald[217]: Collecting audit messages is disabled.
Mar 2 13:29:58.214168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:29:58.214177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:29:58.214187 systemd-journald[217]: Journal started
Mar 2 13:29:58.214207 systemd-journald[217]: Runtime Journal (/run/log/journal/97b3856521b348838335108a02e9cd92) is 8.0M, max 78.5M, 70.5M free.
Mar 2 13:29:58.220290 systemd-modules-load[218]: Inserted module 'overlay'
Mar 2 13:29:58.240314 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:29:58.251058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:29:58.256797 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:29:58.264166 kernel: Bridge firewalling registered
Mar 2 13:29:58.261550 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 2 13:29:58.269149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:29:58.279250 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:29:58.287661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:29:58.295885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:58.315792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:29:58.329353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:29:58.342572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:29:58.357739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:29:58.370230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:29:58.380672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:29:58.393048 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:29:58.399111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:29:58.426810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:29:58.434709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:29:58.453697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:29:58.470691 dracut-cmdline[250]: dracut-dracut-053
Mar 2 13:29:58.482074 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:29:58.486842 systemd-resolved[251]: Positive Trust Anchors:
Mar 2 13:29:58.486852 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:29:58.486882 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:29:58.489060 systemd-resolved[251]: Defaulting to hostname 'linux'.
Mar 2 13:29:58.507316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:29:58.514864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:29:58.532078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:29:58.614509 kernel: SCSI subsystem initialized
Mar 2 13:29:58.621490 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:29:58.630489 kernel: iscsi: registered transport (tcp)
Mar 2 13:29:58.647994 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:29:58.648047 kernel: QLogic iSCSI HBA Driver
Mar 2 13:29:58.683386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:29:58.703605 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:29:58.732512 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:29:58.732578 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:29:58.738103 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:29:58.787495 kernel: raid6: neonx8 gen() 15804 MB/s
Mar 2 13:29:58.806508 kernel: raid6: neonx4 gen() 15692 MB/s
Mar 2 13:29:58.825519 kernel: raid6: neonx2 gen() 13221 MB/s
Mar 2 13:29:58.845489 kernel: raid6: neonx1 gen() 10486 MB/s
Mar 2 13:29:58.864485 kernel: raid6: int64x8 gen() 6978 MB/s
Mar 2 13:29:58.883486 kernel: raid6: int64x4 gen() 7363 MB/s
Mar 2 13:29:58.903486 kernel: raid6: int64x2 gen() 6143 MB/s
Mar 2 13:29:58.925489 kernel: raid6: int64x1 gen() 5066 MB/s
Mar 2 13:29:58.925500 kernel: raid6: using algorithm neonx8 gen() 15804 MB/s
Mar 2 13:29:58.948221 kernel: raid6: .... xor() 12030 MB/s, rmw enabled
Mar 2 13:29:58.948233 kernel: raid6: using neon recovery algorithm
Mar 2 13:29:58.959381 kernel: xor: measuring software checksum speed
Mar 2 13:29:58.959400 kernel: 8regs : 19613 MB/sec
Mar 2 13:29:58.962171 kernel: 32regs : 19669 MB/sec
Mar 2 13:29:58.968041 kernel: arm64_neon : 26188 MB/sec
Mar 2 13:29:58.968053 kernel: xor: using function: arm64_neon (26188 MB/sec)
Mar 2 13:29:59.018547 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:29:59.030232 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:29:59.043672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:29:59.070887 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Mar 2 13:29:59.074394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:29:59.094707 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:29:59.115131 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Mar 2 13:29:59.146230 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:29:59.158750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:29:59.202756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:29:59.222693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:29:59.244430 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:29:59.252418 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:29:59.267345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:29:59.284459 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:29:59.308807 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:29:59.332011 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:29:59.346644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:29:59.360583 kernel: hv_vmbus: Vmbus version:5.3
Mar 2 13:29:59.346766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:29:59.367274 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:29:59.379637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:29:59.379842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:59.426894 kernel: hv_vmbus: registering driver hid_hyperv
Mar 2 13:29:59.426920 kernel: hv_vmbus: registering driver hv_netvsc
Mar 2 13:29:59.426931 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 2 13:29:59.426942 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 2 13:29:59.426953 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 2 13:29:59.426965 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 2 13:29:59.427133 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 2 13:29:59.417412 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:29:59.455777 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 2 13:29:59.455808 kernel: PTP clock support registered
Mar 2 13:29:59.454778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:29:59.491446 kernel: hv_utils: Registering HyperV Utility Driver
Mar 2 13:29:59.491512 kernel: hv_vmbus: registering driver hv_storvsc
Mar 2 13:29:59.491525 kernel: hv_vmbus: registering driver hv_utils
Mar 2 13:29:59.491536 kernel: hv_utils: Heartbeat IC version 3.0
Mar 2 13:29:59.491546 kernel: hv_utils: Shutdown IC version 3.2
Mar 2 13:29:59.491557 kernel: hv_utils: TimeSync IC version 4.0
Mar 2 13:29:59.492546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:29:59.545197 kernel: scsi host1: storvsc_host_t
Mar 2 13:29:59.545392 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 2 13:29:59.545419 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 2 13:29:59.524788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:59.524822 systemd-resolved[251]: Clock change detected. Flushing caches.
Mar 2 13:29:59.566578 kernel: scsi host0: storvsc_host_t
Mar 2 13:29:59.566780 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: VF slot 1 added
Mar 2 13:29:59.559803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:29:59.587958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:59.613033 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Mar 2 13:29:59.613246 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:29:59.613258 kernel: hv_vmbus: registering driver hv_pci
Mar 2 13:29:59.613269 kernel: hv_pci 990cecc0-cec3-4e3c-a2d7-9cf7b329c18b: PCI VMBus probing: Using version 0x10004
Mar 2 13:29:59.613766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:29:59.642785 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Mar 2 13:29:59.642970 kernel: hv_pci 990cecc0-cec3-4e3c-a2d7-9cf7b329c18b: PCI host bridge to bus cec3:00
Mar 2 13:29:59.643068 kernel: pci_bus cec3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 2 13:29:59.650810 kernel: pci_bus cec3:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 2 13:29:59.651035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:29:59.662614 kernel: pci cec3:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 2 13:29:59.669793 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 2 13:29:59.670060 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Mar 2 13:29:59.680301 kernel: sd 1:0:0:0: [sda] Write Protect is off
Mar 2 13:29:59.680550 kernel: pci cec3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 2 13:29:59.680583 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 2 13:29:59.676382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:29:59.706700 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 2 13:29:59.706863 kernel: pci cec3:00:02.0: enabling Extended Tags
Mar 2 13:29:59.706887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:29:59.711492 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Mar 2 13:29:59.726527 kernel: pci cec3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cec3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 2 13:29:59.736801 kernel: pci_bus cec3:00: busn_res: [bus 00-ff] end is updated to 00
Mar 2 13:29:59.737007 kernel: pci cec3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 2 13:29:59.744598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#47 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 2 13:29:59.784172 kernel: mlx5_core cec3:00:02.0: enabling device (0000 -> 0002)
Mar 2 13:29:59.789479 kernel: mlx5_core cec3:00:02.0: firmware version: 16.30.5026
Mar 2 13:29:59.985106 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: VF registering: eth1
Mar 2 13:29:59.985390 kernel: mlx5_core cec3:00:02.0 eth1: joined to eth0
Mar 2 13:29:59.992489 kernel: mlx5_core cec3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 2 13:30:00.002495 kernel: mlx5_core cec3:00:02.0 enP52931s1: renamed from eth1
Mar 2 13:30:00.291669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 2 13:30:00.309492 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (508)
Mar 2 13:30:00.318496 kernel: BTRFS: device fsid 0d0ab669-47ba-4267-b368-82e952673c8e devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (489)
Mar 2 13:30:00.327417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 2 13:30:00.345873 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 2 13:30:00.358798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 2 13:30:00.374328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 2 13:30:00.399667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:30:00.425494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:30:00.433489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:30:00.443681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:30:01.444503 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 2 13:30:01.446483 disk-uuid[608]: The operation has completed successfully.
Mar 2 13:30:01.526770 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:30:01.528490 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:30:01.552600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:30:01.564297 sh[721]: Success
Mar 2 13:30:01.601558 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 2 13:30:01.917745 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:30:01.932646 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:30:01.936450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:30:01.971803 kernel: BTRFS info (device dm-0): first mount of filesystem 0d0ab669-47ba-4267-b368-82e952673c8e
Mar 2 13:30:01.971856 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:30:01.978000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:30:01.983157 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:30:01.986736 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:30:02.558359 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:30:02.562920 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:30:02.579717 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:30:02.584629 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:30:02.624068 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:30:02.624128 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:30:02.624140 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:30:02.665488 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:30:02.680602 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:30:02.684970 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:30:02.691359 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:30:02.699852 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:30:02.718774 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:30:02.730009 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:30:02.767934 systemd-networkd[905]: lo: Link UP
Mar 2 13:30:02.767942 systemd-networkd[905]: lo: Gained carrier
Mar 2 13:30:02.772373 systemd-networkd[905]: Enumeration completed
Mar 2 13:30:02.772790 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:30:02.773107 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:30:02.773110 systemd-networkd[905]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:30:02.781819 systemd[1]: Reached target network.target - Network.
Mar 2 13:30:02.856483 kernel: mlx5_core cec3:00:02.0 enP52931s1: Link up
Mar 2 13:30:02.894739 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: Data path switched to VF: enP52931s1
Mar 2 13:30:02.894976 systemd-networkd[905]: enP52931s1: Link UP
Mar 2 13:30:02.895088 systemd-networkd[905]: eth0: Link UP
Mar 2 13:30:02.895195 systemd-networkd[905]: eth0: Gained carrier
Mar 2 13:30:02.895205 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:30:02.904596 systemd-networkd[905]: enP52931s1: Gained carrier
Mar 2 13:30:02.924508 systemd-networkd[905]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 2 13:30:03.861739 ignition[904]: Ignition 2.19.0
Mar 2 13:30:03.861751 ignition[904]: Stage: fetch-offline
Mar 2 13:30:03.863836 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:30:03.861786 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:03.861794 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:03.861888 ignition[904]: parsed url from cmdline: ""
Mar 2 13:30:03.861892 ignition[904]: no config URL provided
Mar 2 13:30:03.861896 ignition[904]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:30:03.891647 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 2 13:30:03.861903 ignition[904]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:30:03.861908 ignition[904]: failed to fetch config: resource requires networking
Mar 2 13:30:03.862088 ignition[904]: Ignition finished successfully
Mar 2 13:30:03.918708 ignition[914]: Ignition 2.19.0
Mar 2 13:30:03.918714 ignition[914]: Stage: fetch
Mar 2 13:30:03.918882 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:03.918891 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:03.918981 ignition[914]: parsed url from cmdline: ""
Mar 2 13:30:03.918984 ignition[914]: no config URL provided
Mar 2 13:30:03.918989 ignition[914]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:30:03.918996 ignition[914]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:30:03.919015 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 2 13:30:04.015663 ignition[914]: GET result: OK
Mar 2 13:30:04.015718 ignition[914]: config has been read from IMDS userdata
Mar 2 13:30:04.015762 ignition[914]: parsing config with SHA512: f035c3c22a15e3dd44064bf04e8d3df9eead548a283a96af48edb60cfd7066ac3a019e816677b5bb3ead1505ff3ba1411931d2748eab554cbb65787b3d65db32
Mar 2 13:30:04.020505 unknown[914]: fetched base config from "system"
Mar 2 13:30:04.021017 ignition[914]: fetch: fetch complete
Mar 2 13:30:04.020514 unknown[914]: fetched base config from "system"
Mar 2 13:30:04.021022 ignition[914]: fetch: fetch passed
Mar 2 13:30:04.020520 unknown[914]: fetched user config from "azure"
Mar 2 13:30:04.021071 ignition[914]: Ignition finished successfully
Mar 2 13:30:04.025305 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 2 13:30:04.043797 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:30:04.064330 ignition[920]: Ignition 2.19.0
Mar 2 13:30:04.064341 ignition[920]: Stage: kargs
Mar 2 13:30:04.068548 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:30:04.064602 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:04.064612 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:04.065733 ignition[920]: kargs: kargs passed
Mar 2 13:30:04.085772 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:30:04.065802 ignition[920]: Ignition finished successfully
Mar 2 13:30:04.108922 ignition[926]: Ignition 2.19.0
Mar 2 13:30:04.108939 ignition[926]: Stage: disks
Mar 2 13:30:04.113061 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:30:04.109122 ignition[926]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:04.119341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:30:04.109137 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:04.127786 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:30:04.110210 ignition[926]: disks: disks passed
Mar 2 13:30:04.136735 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:30:04.110298 ignition[926]: Ignition finished successfully
Mar 2 13:30:04.145446 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:30:04.154571 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:30:04.181717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:30:04.253446 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 2 13:30:04.263142 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:30:04.284612 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:30:04.343666 kernel: EXT4-fs (sda9): mounted filesystem a5f5c21d-8a27-4a94-875f-5735c39d000b r/w with ordered data mode. Quota mode: none.
Mar 2 13:30:04.344214 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:30:04.348755 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:30:04.393550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:30:04.413483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Mar 2 13:30:04.425489 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:30:04.425547 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:30:04.425575 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:30:04.433696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:30:04.442589 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:30:04.446247 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 2 13:30:04.452586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:30:04.452621 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:30:04.474768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:30:04.482119 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:30:04.492685 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:30:04.829583 systemd-networkd[905]: eth0: Gained IPv6LL
Mar 2 13:30:05.069293 coreos-metadata[963]: Mar 02 13:30:05.069 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 2 13:30:05.078176 coreos-metadata[963]: Mar 02 13:30:05.078 INFO Fetch successful
Mar 2 13:30:05.078176 coreos-metadata[963]: Mar 02 13:30:05.078 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 2 13:30:05.094301 coreos-metadata[963]: Mar 02 13:30:05.094 INFO Fetch successful
Mar 2 13:30:05.111149 coreos-metadata[963]: Mar 02 13:30:05.110 INFO wrote hostname ci-4081.3.101-160832fd4e to /sysroot/etc/hostname
Mar 2 13:30:05.118961 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:30:05.271805 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:30:05.311441 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:30:05.320145 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:30:05.326422 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:30:06.452450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:30:06.465692 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:30:06.477393 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:30:06.493601 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:30:06.488799 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:30:06.517386 ignition[1064]: INFO : Ignition 2.19.0
Mar 2 13:30:06.517386 ignition[1064]: INFO : Stage: mount
Mar 2 13:30:06.529637 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:06.529637 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:06.529637 ignition[1064]: INFO : mount: mount passed
Mar 2 13:30:06.529637 ignition[1064]: INFO : Ignition finished successfully
Mar 2 13:30:06.522255 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:30:06.535726 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:30:06.552964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:30:06.581757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:30:06.601488 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076)
Mar 2 13:30:06.601534 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3
Mar 2 13:30:06.611113 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 2 13:30:06.614425 kernel: BTRFS info (device sda6): using free space tree
Mar 2 13:30:06.621480 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 2 13:30:06.623856 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:30:06.649110 ignition[1094]: INFO : Ignition 2.19.0
Mar 2 13:30:06.649110 ignition[1094]: INFO : Stage: files
Mar 2 13:30:06.655494 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:06.655494 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:06.655494 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:30:06.655494 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:30:06.655494 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:30:06.708728 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:30:06.714509 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:30:06.714509 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:30:06.710013 unknown[1094]: wrote ssh authorized keys file for user: core
Mar 2 13:30:06.731422 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 2 13:30:06.740159 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 2 13:30:06.797536 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:30:07.082462 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 2 13:30:07.635680 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 2 13:30:07.813174 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:30:07.813174 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 2 13:30:07.838977 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:30:07.848405 ignition[1094]: INFO : files: files passed
Mar 2 13:30:07.848405 ignition[1094]: INFO : Ignition finished successfully
Mar 2 13:30:07.848815 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:30:07.887797 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:30:07.902726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:30:07.911742 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:30:07.912533 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:30:07.950306 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:30:07.950306 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:30:07.947893 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:30:07.980529 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:30:07.956015 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:30:07.992774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:30:08.026143 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:30:08.026262 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:30:08.035947 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:30:08.045693 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:30:08.054768 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:30:08.067748 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:30:08.090066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:30:08.103806 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:30:08.119575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:30:08.124861 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:30:08.134623 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:30:08.143337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:30:08.143480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:30:08.156198 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:30:08.160879 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:30:08.170025 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:30:08.179083 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:30:08.188024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:30:08.197583 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:30:08.206663 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:30:08.216635 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:30:08.225224 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:30:08.234782 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:30:08.242664 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:30:08.242795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:30:08.254269 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:30:08.259344 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:30:08.268439 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:30:08.268521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:30:08.278062 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:30:08.278173 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:30:08.291724 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:30:08.291838 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:30:08.297331 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:30:08.297426 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:30:08.361314 ignition[1145]: INFO : Ignition 2.19.0
Mar 2 13:30:08.361314 ignition[1145]: INFO : Stage: umount
Mar 2 13:30:08.361314 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:30:08.361314 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 2 13:30:08.305682 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 2 13:30:08.385574 ignition[1145]: INFO : umount: umount passed
Mar 2 13:30:08.385574 ignition[1145]: INFO : Ignition finished successfully
Mar 2 13:30:08.305784 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 2 13:30:08.331716 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:30:08.343517 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:30:08.348236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:30:08.379699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:30:08.390481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:30:08.390674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:30:08.408149 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:30:08.408272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:30:08.416442 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:30:08.417509 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:30:08.417636 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:30:08.432071 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:30:08.432176 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:30:08.440394 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:30:08.440446 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:30:08.450785 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:30:08.450842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:30:08.455800 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 2 13:30:08.455847 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 2 13:30:08.466580 systemd[1]: Stopped target network.target - Network.
Mar 2 13:30:08.474705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:30:08.474763 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:30:08.480774 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:30:08.489430 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:30:08.502292 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:30:08.507680 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:30:08.516822 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:30:08.527105 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:30:08.527152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:30:08.536362 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:30:08.536401 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:30:08.546221 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:30:08.546272 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:30:08.554482 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:30:08.554524 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:30:08.564953 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:30:08.573079 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:30:08.586236 systemd-networkd[905]: eth0: DHCPv6 lease lost
Mar 2 13:30:08.588032 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:30:08.588216 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:30:08.597529 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:30:08.768971 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: Data path switched from VF: enP52931s1
Mar 2 13:30:08.597574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:30:08.625666 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:30:08.636280 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:30:08.636348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:30:08.645446 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:30:08.661103 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:30:08.661261 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:30:08.673933 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:30:08.674151 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:30:08.705973 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:30:08.706091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:30:08.715775 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:30:08.715825 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:30:08.725416 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:30:08.725478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:30:08.742360 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:30:08.742431 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:30:08.758161 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:30:08.758238 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:30:08.781810 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:30:08.796184 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:30:08.796267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:30:08.806615 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:30:08.806668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:30:08.817582 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:30:08.817631 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:30:08.827939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:30:08.828006 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:30:08.842054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:30:08.842114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:30:08.852452 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:30:08.852686 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:30:08.861087 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:30:08.861175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:30:08.874021 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:30:08.874203 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:30:08.882938 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:30:08.892262 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:30:08.892334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:30:08.915742 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:30:08.937927 systemd[1]: Switching root.
Mar 2 13:30:09.118672 systemd-journald[217]: Journal stopped
Mar 2 13:29:58.207271 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 2 13:29:58.207298 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Mar 2 11:11:01 -00 2026
Mar 2 13:29:58.207308 kernel: KASLR enabled
Mar 2 13:29:58.207315 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 2 13:29:58.207323 kernel: printk: bootconsole [pl11] enabled
Mar 2 13:29:58.207329 kernel: efi: EFI v2.7 by EDK II
Mar 2 13:29:58.207337 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 2 13:29:58.207344 kernel: random: crng init done
Mar 2 13:29:58.207351 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:29:58.207358 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 2 13:29:58.207364 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207370 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207379 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 2 13:29:58.207386 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207394 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207401 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207409 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207417 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207425 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207433 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 2 13:29:58.207441 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 2 13:29:58.207449 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 2 13:29:58.207456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 2 13:29:58.207463 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 2 13:29:58.207470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 2 13:29:58.209536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 2 13:29:58.209557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 2 13:29:58.209564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 2 13:29:58.209577 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 2 13:29:58.209584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 2 13:29:58.209592 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 2 13:29:58.209599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 2 13:29:58.209606 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 2 13:29:58.209612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 2 13:29:58.209620 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 2 13:29:58.209627 kernel: Zone ranges:
Mar 2 13:29:58.209635 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 2 13:29:58.209643 kernel: DMA32 empty
Mar 2 13:29:58.209651 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:29:58.209659 kernel: Movable zone start for each node
Mar 2 13:29:58.209672 kernel: Early memory node ranges
Mar 2 13:29:58.209680 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 2 13:29:58.209689 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 2 13:29:58.209697 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 2 13:29:58.209705 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 2 13:29:58.209715 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 2 13:29:58.209723 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 2 13:29:58.209730 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 2 13:29:58.209739 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 2 13:29:58.209747 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 2 13:29:58.209754 kernel: psci: probing for conduit method from ACPI.
Mar 2 13:29:58.209762 kernel: psci: PSCIv1.1 detected in firmware.
Mar 2 13:29:58.209770 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 2 13:29:58.209777 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 2 13:29:58.209785 kernel: psci: SMC Calling Convention v1.4
Mar 2 13:29:58.209793 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 2 13:29:58.209800 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 2 13:29:58.209809 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 2 13:29:58.209817 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 2 13:29:58.209825 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 2 13:29:58.209833 kernel: Detected PIPT I-cache on CPU0
Mar 2 13:29:58.209841 kernel: CPU features: detected: GIC system register CPU interface
Mar 2 13:29:58.209849 kernel: CPU features: detected: Hardware dirty bit management
Mar 2 13:29:58.209857 kernel: CPU features: detected: Spectre-BHB
Mar 2 13:29:58.209864 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 2 13:29:58.209872 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 2 13:29:58.209879 kernel: CPU features: detected: ARM erratum 1418040
Mar 2 13:29:58.209888 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 2 13:29:58.209897 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 2 13:29:58.209905 kernel: alternatives: applying boot alternatives
Mar 2 13:29:58.209914 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:29:58.209923 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:29:58.209931 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:29:58.209939 kernel: Fallback order for Node 0: 0
Mar 2 13:29:58.209946 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 2 13:29:58.209954 kernel: Policy zone: Normal
Mar 2 13:29:58.209962 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:29:58.209969 kernel: software IO TLB: area num 2.
Mar 2 13:29:58.209977 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 2 13:29:58.209987 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 2 13:29:58.209995 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 2 13:29:58.210002 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:29:58.210011 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:29:58.210020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 2 13:29:58.210027 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:29:58.210035 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:29:58.210043 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:29:58.210050 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 2 13:29:58.210058 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 2 13:29:58.210065 kernel: GICv3: 960 SPIs implemented
Mar 2 13:29:58.210075 kernel: GICv3: 0 Extended SPIs implemented
Mar 2 13:29:58.210083 kernel: Root IRQ handler: gic_handle_irq
Mar 2 13:29:58.210091 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 2 13:29:58.210099 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 2 13:29:58.210108 kernel: ITS: No ITS available, not enabling LPIs
Mar 2 13:29:58.210116 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:29:58.210124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:29:58.210131 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 2 13:29:58.210140 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 2 13:29:58.210148 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 2 13:29:58.210156 kernel: Console: colour dummy device 80x25
Mar 2 13:29:58.210165 kernel: printk: console [tty1] enabled
Mar 2 13:29:58.210173 kernel: ACPI: Core revision 20230628
Mar 2 13:29:58.210182 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 2 13:29:58.210191 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:29:58.210199 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:29:58.210208 kernel: landlock: Up and running.
Mar 2 13:29:58.210217 kernel: SELinux: Initializing.
Mar 2 13:29:58.210225 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210233 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210243 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:29:58.210251 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 2 13:29:58.210260 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 2 13:29:58.210268 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 2 13:29:58.210275 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 2 13:29:58.210283 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:29:58.210292 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:29:58.210301 kernel: Remapping and enabling EFI services.
Mar 2 13:29:58.210315 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:29:58.210324 kernel: Detected PIPT I-cache on CPU1
Mar 2 13:29:58.210332 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 2 13:29:58.210340 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 2 13:29:58.210350 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 2 13:29:58.210359 kernel: smp: Brought up 1 node, 2 CPUs
Mar 2 13:29:58.210368 kernel: SMP: Total of 2 processors activated.
Mar 2 13:29:58.210376 kernel: CPU features: detected: 32-bit EL0 Support
Mar 2 13:29:58.210386 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 2 13:29:58.210396 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 2 13:29:58.210404 kernel: CPU features: detected: CRC32 instructions
Mar 2 13:29:58.210412 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 2 13:29:58.210420 kernel: CPU features: detected: LSE atomic instructions
Mar 2 13:29:58.210429 kernel: CPU features: detected: Privileged Access Never
Mar 2 13:29:58.210437 kernel: CPU: All CPU(s) started at EL1
Mar 2 13:29:58.210449 kernel: alternatives: applying system-wide alternatives
Mar 2 13:29:58.210457 kernel: devtmpfs: initialized
Mar 2 13:29:58.210466 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:29:58.210504 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 2 13:29:58.210514 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:29:58.210522 kernel: SMBIOS 3.1.0 present.
Mar 2 13:29:58.210531 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 2 13:29:58.210540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:29:58.210549 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 2 13:29:58.210558 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 2 13:29:58.210567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 2 13:29:58.210575 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:29:58.210586 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 2 13:29:58.210595 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:29:58.210604 kernel: cpuidle: using governor menu
Mar 2 13:29:58.210612 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 2 13:29:58.210621 kernel: ASID allocator initialised with 32768 entries
Mar 2 13:29:58.210630 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:29:58.210638 kernel: Serial: AMBA PL011 UART driver
Mar 2 13:29:58.210647 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 2 13:29:58.210656 kernel: Modules: 0 pages in range for non-PLT usage
Mar 2 13:29:58.210667 kernel: Modules: 509008 pages in range for PLT usage
Mar 2 13:29:58.210676 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210684 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:29:58.210693 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 2 13:29:58.210709 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210717 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:29:58.210725 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 2 13:29:58.210734 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 2 13:29:58.210744 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:29:58.210753 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:29:58.210761 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:29:58.210769 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:29:58.210778 kernel: ACPI: Interpreter enabled
Mar 2 13:29:58.210787 kernel: ACPI: Using GIC for interrupt routing
Mar 2 13:29:58.210796 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 2 13:29:58.210804 kernel: printk: console [ttyAMA0] enabled
Mar 2 13:29:58.210813 kernel: printk: bootconsole [pl11] disabled
Mar 2 13:29:58.210824 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 2 13:29:58.210833 kernel: iommu: Default domain type: Translated
Mar 2 13:29:58.210841 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 2 13:29:58.210850 kernel: efivars: Registered efivars operations
Mar 2 13:29:58.210859 kernel: vgaarb: loaded
Mar 2 13:29:58.210869 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 2 13:29:58.210877 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:29:58.210885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:29:58.210894 kernel: pnp: PnP ACPI init
Mar 2 13:29:58.210904 kernel: pnp: PnP ACPI: found 0 devices
Mar 2 13:29:58.210912 kernel: NET: Registered PF_INET protocol family
Mar 2 13:29:58.210921 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:29:58.210930 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:29:58.210939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:29:58.210947 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:29:58.210956 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:29:58.210964 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:29:58.210973 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210983 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:29:58.210992 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:29:58.211001 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:29:58.211009 kernel: kvm [1]: HYP mode not available
Mar 2 13:29:58.211018 kernel: Initialise system trusted keyrings
Mar 2 13:29:58.211028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:29:58.211036 kernel: Key type asymmetric registered
Mar 2 13:29:58.211045 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:29:58.211054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 2 13:29:58.211065 kernel: io scheduler mq-deadline registered
Mar 2 13:29:58.211074 kernel: io scheduler kyber registered
Mar 2 13:29:58.211082 kernel: io scheduler bfq registered
Mar 2 13:29:58.211091 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:29:58.211118 kernel: thunder_xcv, ver 1.0
Mar 2 13:29:58.211127 kernel: thunder_bgx, ver 1.0
Mar 2 13:29:58.211136 kernel: nicpf, ver 1.0
Mar 2 13:29:58.211145 kernel: nicvf, ver 1.0
Mar 2 13:29:58.211330 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 2 13:29:58.211427 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-02T13:29:57 UTC (1772458197)
Mar 2 13:29:58.211439 kernel: efifb: probing for efifb
Mar 2 13:29:58.211448 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 2 13:29:58.211457 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 2 13:29:58.211465 kernel: efifb: scrolling: redraw
Mar 2 13:29:58.211473 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 2 13:29:58.213531 kernel: Console: switching to colour frame buffer device 128x48
Mar 2 13:29:58.213542 kernel: fb0: EFI VGA frame buffer device
Mar 2 13:29:58.213556 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 2 13:29:58.213565 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 2 13:29:58.213573 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 2 13:29:58.213582 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 2 13:29:58.213590 kernel: watchdog: Hard watchdog permanently disabled
Mar 2 13:29:58.213598 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:29:58.213607 kernel: Segment Routing with IPv6
Mar 2 13:29:58.213615 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:29:58.213623 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:29:58.213633 kernel: Key type dns_resolver registered
Mar 2 13:29:58.213642 kernel: registered taskstats version 1
Mar 2 13:29:58.213650 kernel: Loading compiled-in X.509 certificates
Mar 2 13:29:58.213658 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 888055ac257926b028c9aac8084c1e2b1bcee773'
Mar 2 13:29:58.213666 kernel: Key type .fscrypt registered
Mar 2 13:29:58.213674 kernel: Key type fscrypt-provisioning registered
Mar 2 13:29:58.213682 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:29:58.213691 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:29:58.213699 kernel: ima: No architecture policies found
Mar 2 13:29:58.213711 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 2 13:29:58.213719 kernel: clk: Disabling unused clocks
Mar 2 13:29:58.213727 kernel: Freeing unused kernel memory: 39424K
Mar 2 13:29:58.213735 kernel: Run /init as init process
Mar 2 13:29:58.213743 kernel: with arguments:
Mar 2 13:29:58.213751 kernel: /init
Mar 2 13:29:58.213759 kernel: with environment:
Mar 2 13:29:58.213767 kernel: HOME=/
Mar 2 13:29:58.213775 kernel: TERM=linux
Mar 2 13:29:58.213785 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:29:58.213798 systemd[1]: Detected virtualization microsoft.
Mar 2 13:29:58.213807 systemd[1]: Detected architecture arm64.
Mar 2 13:29:58.213816 systemd[1]: Running in initrd.
Mar 2 13:29:58.213824 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:29:58.213833 systemd[1]: Hostname set to .
Mar 2 13:29:58.213842 systemd[1]: Initializing machine ID from random generator.
Mar 2 13:29:58.213854 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:29:58.213863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:29:58.213872 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:29:58.213882 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:29:58.213891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:29:58.213901 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:29:58.213911 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:29:58.213921 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:29:58.213932 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:29:58.213941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:29:58.213950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:29:58.213958 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:29:58.213968 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:29:58.213977 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:29:58.213985 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:29:58.213994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:29:58.214006 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:29:58.214015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:29:58.214024 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:29:58.214033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:29:58.214042 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:29:58.214051 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:29:58.214059 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:29:58.214068 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:29:58.214079 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:29:58.214088 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:29:58.214096 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:29:58.214105 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:29:58.214143 systemd-journald[217]: Collecting audit messages is disabled.
Mar 2 13:29:58.214168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:29:58.214177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:29:58.214187 systemd-journald[217]: Journal started
Mar 2 13:29:58.214207 systemd-journald[217]: Runtime Journal (/run/log/journal/97b3856521b348838335108a02e9cd92) is 8.0M, max 78.5M, 70.5M free.
Mar 2 13:29:58.220290 systemd-modules-load[218]: Inserted module 'overlay'
Mar 2 13:29:58.240314 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:29:58.251058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:29:58.256797 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:29:58.264166 kernel: Bridge firewalling registered
Mar 2 13:29:58.261550 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 2 13:29:58.269149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:29:58.279250 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:29:58.287661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:29:58.295885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:58.315792 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:29:58.329353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:29:58.342572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:29:58.357739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:29:58.370230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:29:58.380672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:29:58.393048 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:29:58.399111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:29:58.426810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:29:58.434709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:29:58.453697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:29:58.470691 dracut-cmdline[250]: dracut-dracut-053
Mar 2 13:29:58.482074 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7ecec6e0f4313fe7e6ab44dac0c51edbf0b22765a212833abcec729cd9dc543f
Mar 2 13:29:58.486842 systemd-resolved[251]: Positive Trust Anchors:
Mar 2 13:29:58.486852 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:29:58.486882 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:29:58.489060 systemd-resolved[251]: Defaulting to hostname 'linux'.
Mar 2 13:29:58.507316 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:29:58.514864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:29:58.532078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:29:58.614509 kernel: SCSI subsystem initialized
Mar 2 13:29:58.621490 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:29:58.630489 kernel: iscsi: registered transport (tcp)
Mar 2 13:29:58.647994 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:29:58.648047 kernel: QLogic iSCSI HBA Driver
Mar 2 13:29:58.683386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:29:58.703605 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:29:58.732512 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:29:58.732578 kernel: device-mapper: uevent: version 1.0.3 Mar 2 13:29:58.738103 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 2 13:29:58.787495 kernel: raid6: neonx8 gen() 15804 MB/s Mar 2 13:29:58.806508 kernel: raid6: neonx4 gen() 15692 MB/s Mar 2 13:29:58.825519 kernel: raid6: neonx2 gen() 13221 MB/s Mar 2 13:29:58.845489 kernel: raid6: neonx1 gen() 10486 MB/s Mar 2 13:29:58.864485 kernel: raid6: int64x8 gen() 6978 MB/s Mar 2 13:29:58.883486 kernel: raid6: int64x4 gen() 7363 MB/s Mar 2 13:29:58.903486 kernel: raid6: int64x2 gen() 6143 MB/s Mar 2 13:29:58.925489 kernel: raid6: int64x1 gen() 5066 MB/s Mar 2 13:29:58.925500 kernel: raid6: using algorithm neonx8 gen() 15804 MB/s Mar 2 13:29:58.948221 kernel: raid6: .... xor() 12030 MB/s, rmw enabled Mar 2 13:29:58.948233 kernel: raid6: using neon recovery algorithm Mar 2 13:29:58.959381 kernel: xor: measuring software checksum speed Mar 2 13:29:58.959400 kernel: 8regs : 19613 MB/sec Mar 2 13:29:58.962171 kernel: 32regs : 19669 MB/sec Mar 2 13:29:58.968041 kernel: arm64_neon : 26188 MB/sec Mar 2 13:29:58.968053 kernel: xor: using function: arm64_neon (26188 MB/sec) Mar 2 13:29:59.018547 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 2 13:29:59.030232 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:29:59.043672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:29:59.070887 systemd-udevd[436]: Using default interface naming scheme 'v255'. Mar 2 13:29:59.074394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:29:59.094707 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 2 13:29:59.115131 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Mar 2 13:29:59.146230 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 2 13:29:59.158750 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:29:59.202756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:29:59.222693 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 2 13:29:59.244430 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 2 13:29:59.252418 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:29:59.267345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:29:59.284459 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:29:59.308807 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 2 13:29:59.332011 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:29:59.346644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:29:59.360583 kernel: hv_vmbus: Vmbus version:5.3 Mar 2 13:29:59.346766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:29:59.367274 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:29:59.379637 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:29:59.379842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:29:59.426894 kernel: hv_vmbus: registering driver hid_hyperv Mar 2 13:29:59.426920 kernel: hv_vmbus: registering driver hv_netvsc Mar 2 13:29:59.426931 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 2 13:29:59.426942 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Mar 2 13:29:59.426953 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 2 13:29:59.426965 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 2 13:29:59.427133 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 2 13:29:59.417412 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:29:59.455777 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 2 13:29:59.455808 kernel: PTP clock support registered Mar 2 13:29:59.454778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:29:59.491446 kernel: hv_utils: Registering HyperV Utility Driver Mar 2 13:29:59.491512 kernel: hv_vmbus: registering driver hv_storvsc Mar 2 13:29:59.491525 kernel: hv_vmbus: registering driver hv_utils Mar 2 13:29:59.491536 kernel: hv_utils: Heartbeat IC version 3.0 Mar 2 13:29:59.491546 kernel: hv_utils: Shutdown IC version 3.2 Mar 2 13:29:59.491557 kernel: hv_utils: TimeSync IC version 4.0 Mar 2 13:29:59.492546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:29:59.545197 kernel: scsi host1: storvsc_host_t Mar 2 13:29:59.545392 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 2 13:29:59.545419 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 2 13:29:59.524788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:29:59.524822 systemd-resolved[251]: Clock change detected. Flushing caches. Mar 2 13:29:59.566578 kernel: scsi host0: storvsc_host_t Mar 2 13:29:59.566780 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: VF slot 1 added Mar 2 13:29:59.559803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:29:59.587958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:29:59.613033 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Mar 2 13:29:59.613246 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 2 13:29:59.613258 kernel: hv_vmbus: registering driver hv_pci Mar 2 13:29:59.613269 kernel: hv_pci 990cecc0-cec3-4e3c-a2d7-9cf7b329c18b: PCI VMBus probing: Using version 0x10004 Mar 2 13:29:59.613766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 2 13:29:59.642785 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Mar 2 13:29:59.642970 kernel: hv_pci 990cecc0-cec3-4e3c-a2d7-9cf7b329c18b: PCI host bridge to bus cec3:00 Mar 2 13:29:59.643068 kernel: pci_bus cec3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 2 13:29:59.650810 kernel: pci_bus cec3:00: No busn resource found for root bus, will use [bus 00-ff] Mar 2 13:29:59.651035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:29:59.662614 kernel: pci cec3:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 2 13:29:59.669793 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 2 13:29:59.670060 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Mar 2 13:29:59.680301 kernel: sd 1:0:0:0: [sda] Write Protect is off Mar 2 13:29:59.680550 kernel: pci cec3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 2 13:29:59.680583 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 2 13:29:59.676382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 2 13:29:59.706700 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 2 13:29:59.706863 kernel: pci cec3:00:02.0: enabling Extended Tags Mar 2 13:29:59.706887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:29:59.711492 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Mar 2 13:29:59.726527 kernel: pci cec3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cec3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 2 13:29:59.736801 kernel: pci_bus cec3:00: busn_res: [bus 00-ff] end is updated to 00 Mar 2 13:29:59.737007 kernel: pci cec3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 2 13:29:59.744598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#47 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:29:59.784172 kernel: mlx5_core cec3:00:02.0: enabling device (0000 -> 0002) Mar 2 13:29:59.789479 kernel: mlx5_core cec3:00:02.0: firmware version: 16.30.5026 Mar 2 13:29:59.985106 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: VF registering: eth1 Mar 2 13:29:59.985390 kernel: mlx5_core cec3:00:02.0 eth1: joined to eth0 Mar 2 13:29:59.992489 kernel: mlx5_core cec3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 2 13:30:00.002495 kernel: mlx5_core cec3:00:02.0 enP52931s1: renamed from eth1 Mar 2 13:30:00.291669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Mar 2 13:30:00.309492 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (508) Mar 2 13:30:00.318496 kernel: BTRFS: device fsid 0d0ab669-47ba-4267-b368-82e952673c8e devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (489) Mar 2 13:30:00.327417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 2 13:30:00.345873 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Mar 2 13:30:00.358798 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 2 13:30:00.374328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 2 13:30:00.399667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 2 13:30:00.425494 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:30:00.433489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:30:00.443681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:30:01.444503 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 2 13:30:01.446483 disk-uuid[608]: The operation has completed successfully. Mar 2 13:30:01.526770 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 2 13:30:01.528490 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 2 13:30:01.552600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 2 13:30:01.564297 sh[721]: Success Mar 2 13:30:01.601558 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 2 13:30:01.917745 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 2 13:30:01.932646 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 2 13:30:01.936450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 2 13:30:01.971803 kernel: BTRFS info (device dm-0): first mount of filesystem 0d0ab669-47ba-4267-b368-82e952673c8e Mar 2 13:30:01.971856 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:30:01.978000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 2 13:30:01.983157 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 2 13:30:01.986736 kernel: BTRFS info (device dm-0): using free space tree Mar 2 13:30:02.558359 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 2 13:30:02.562920 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 2 13:30:02.579717 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 2 13:30:02.584629 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 2 13:30:02.624068 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:30:02.624128 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:30:02.624140 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:30:02.665488 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:30:02.680602 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 2 13:30:02.684970 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:30:02.691359 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 2 13:30:02.699852 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:30:02.718774 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 2 13:30:02.730009 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 2 13:30:02.767934 systemd-networkd[905]: lo: Link UP Mar 2 13:30:02.767942 systemd-networkd[905]: lo: Gained carrier Mar 2 13:30:02.772373 systemd-networkd[905]: Enumeration completed Mar 2 13:30:02.772790 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:30:02.773107 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:30:02.773110 systemd-networkd[905]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:30:02.781819 systemd[1]: Reached target network.target - Network. Mar 2 13:30:02.856483 kernel: mlx5_core cec3:00:02.0 enP52931s1: Link up Mar 2 13:30:02.894739 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: Data path switched to VF: enP52931s1 Mar 2 13:30:02.894976 systemd-networkd[905]: enP52931s1: Link UP Mar 2 13:30:02.895088 systemd-networkd[905]: eth0: Link UP Mar 2 13:30:02.895195 systemd-networkd[905]: eth0: Gained carrier Mar 2 13:30:02.895205 systemd-networkd[905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:30:02.904596 systemd-networkd[905]: enP52931s1: Gained carrier Mar 2 13:30:02.924508 systemd-networkd[905]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:30:03.861739 ignition[904]: Ignition 2.19.0 Mar 2 13:30:03.861751 ignition[904]: Stage: fetch-offline Mar 2 13:30:03.863836 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 2 13:30:03.861786 ignition[904]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:03.861794 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:03.861888 ignition[904]: parsed url from cmdline: "" Mar 2 13:30:03.861892 ignition[904]: no config URL provided Mar 2 13:30:03.861896 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:30:03.891647 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 2 13:30:03.861903 ignition[904]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:30:03.861908 ignition[904]: failed to fetch config: resource requires networking Mar 2 13:30:03.862088 ignition[904]: Ignition finished successfully Mar 2 13:30:03.918708 ignition[914]: Ignition 2.19.0 Mar 2 13:30:03.918714 ignition[914]: Stage: fetch Mar 2 13:30:03.918882 ignition[914]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:03.918891 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:03.918981 ignition[914]: parsed url from cmdline: "" Mar 2 13:30:03.918984 ignition[914]: no config URL provided Mar 2 13:30:03.918989 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Mar 2 13:30:03.918996 ignition[914]: no config at "/usr/lib/ignition/user.ign" Mar 2 13:30:03.919015 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 2 13:30:04.015663 ignition[914]: GET result: OK Mar 2 13:30:04.015718 ignition[914]: config has been read from IMDS userdata Mar 2 13:30:04.015762 ignition[914]: parsing config with SHA512: f035c3c22a15e3dd44064bf04e8d3df9eead548a283a96af48edb60cfd7066ac3a019e816677b5bb3ead1505ff3ba1411931d2748eab554cbb65787b3d65db32 Mar 2 13:30:04.020505 unknown[914]: fetched base config from "system" Mar 2 13:30:04.021017 ignition[914]: fetch: fetch complete Mar 2 13:30:04.020514 unknown[914]: fetched base config from "system"
Mar 2 13:30:04.021022 ignition[914]: fetch: fetch passed Mar 2 13:30:04.020520 unknown[914]: fetched user config from "azure" Mar 2 13:30:04.021071 ignition[914]: Ignition finished successfully Mar 2 13:30:04.025305 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 2 13:30:04.043797 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 2 13:30:04.064330 ignition[920]: Ignition 2.19.0 Mar 2 13:30:04.064341 ignition[920]: Stage: kargs Mar 2 13:30:04.068548 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 2 13:30:04.064602 ignition[920]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:04.064612 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:04.065733 ignition[920]: kargs: kargs passed Mar 2 13:30:04.085772 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 2 13:30:04.065802 ignition[920]: Ignition finished successfully Mar 2 13:30:04.108922 ignition[926]: Ignition 2.19.0 Mar 2 13:30:04.108939 ignition[926]: Stage: disks Mar 2 13:30:04.113061 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 2 13:30:04.109122 ignition[926]: no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:04.119341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 2 13:30:04.109137 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:04.127786 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 2 13:30:04.110210 ignition[926]: disks: disks passed Mar 2 13:30:04.136735 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:30:04.110298 ignition[926]: Ignition finished successfully Mar 2 13:30:04.145446 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:30:04.154571 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:30:04.181717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:30:04.253446 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 2 13:30:04.263142 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 2 13:30:04.284612 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 2 13:30:04.343666 kernel: EXT4-fs (sda9): mounted filesystem a5f5c21d-8a27-4a94-875f-5735c39d000b r/w with ordered data mode. Quota mode: none. Mar 2 13:30:04.344214 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 2 13:30:04.348755 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 2 13:30:04.393550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:30:04.413483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Mar 2 13:30:04.425489 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:30:04.425547 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:30:04.425575 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:30:04.433696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 2 13:30:04.442589 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:30:04.446247 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 2 13:30:04.452586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 2 13:30:04.452621 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:30:04.474768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 2 13:30:04.482119 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 2 13:30:04.492685 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 2 13:30:04.829583 systemd-networkd[905]: eth0: Gained IPv6LL Mar 2 13:30:05.069293 coreos-metadata[963]: Mar 02 13:30:05.069 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 2 13:30:05.078176 coreos-metadata[963]: Mar 02 13:30:05.078 INFO Fetch successful Mar 2 13:30:05.078176 coreos-metadata[963]: Mar 02 13:30:05.078 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 2 13:30:05.094301 coreos-metadata[963]: Mar 02 13:30:05.094 INFO Fetch successful Mar 2 13:30:05.111149 coreos-metadata[963]: Mar 02 13:30:05.110 INFO wrote hostname ci-4081.3.101-160832fd4e to /sysroot/etc/hostname Mar 2 13:30:05.118961 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 2 13:30:05.271805 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Mar 2 13:30:05.311441 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Mar 2 13:30:05.320145 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Mar 2 13:30:05.326422 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Mar 2 13:30:06.452450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 2 13:30:06.465692 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 2 13:30:06.477393 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 2 13:30:06.493601 kernel: BTRFS info (device sda6): last unmount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:30:06.488799 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 2 13:30:06.517386 ignition[1064]: INFO : Ignition 2.19.0 Mar 2 13:30:06.517386 ignition[1064]: INFO : Stage: mount Mar 2 13:30:06.529637 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:06.529637 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:06.529637 ignition[1064]: INFO : mount: mount passed Mar 2 13:30:06.529637 ignition[1064]: INFO : Ignition finished successfully Mar 2 13:30:06.522255 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 2 13:30:06.535726 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 2 13:30:06.552964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 2 13:30:06.581757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 2 13:30:06.601488 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076) Mar 2 13:30:06.601534 kernel: BTRFS info (device sda6): first mount of filesystem 86492f98-8fd6-4311-9de7-7dd8660c41f3 Mar 2 13:30:06.611113 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 2 13:30:06.614425 kernel: BTRFS info (device sda6): using free space tree Mar 2 13:30:06.621480 kernel: BTRFS info (device sda6): auto enabling async discard Mar 2 13:30:06.623856 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 2 13:30:06.649110 ignition[1094]: INFO : Ignition 2.19.0 Mar 2 13:30:06.649110 ignition[1094]: INFO : Stage: files Mar 2 13:30:06.655494 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:06.655494 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:06.655494 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Mar 2 13:30:06.655494 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 2 13:30:06.655494 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 2 13:30:06.708728 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 2 13:30:06.714509 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 2 13:30:06.714509 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 2 13:30:06.710013 unknown[1094]: wrote ssh authorized keys file for user: core Mar 2 13:30:06.731422 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 2 13:30:06.740159 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 2 13:30:06.797536 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 2 13:30:07.082462 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 2 13:30:07.091653 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 2 13:30:07.635680 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 2 13:30:07.813174 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 2 13:30:07.813174 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 2 13:30:07.838977 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 2 13:30:07.848405 ignition[1094]: INFO : files: files passed Mar 2 13:30:07.848405 ignition[1094]: INFO : Ignition finished successfully Mar 2 13:30:07.848815 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 2 13:30:07.887797 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 2 13:30:07.902726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:30:07.911742 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 2 13:30:07.912533 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 2 13:30:07.950306 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:30:07.950306 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:30:07.947893 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:30:07.980529 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 2 13:30:07.956015 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 2 13:30:07.992774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 2 13:30:08.026143 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 2 13:30:08.026262 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 2 13:30:08.035947 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 2 13:30:08.045693 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 2 13:30:08.054768 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 2 13:30:08.067748 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 2 13:30:08.090066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:30:08.103806 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 2 13:30:08.119575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:30:08.124861 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 2 13:30:08.134623 systemd[1]: Stopped target timers.target - Timer Units. Mar 2 13:30:08.143337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 2 13:30:08.143480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 2 13:30:08.156198 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:30:08.160879 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:30:08.170025 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:30:08.179083 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:30:08.188024 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 13:30:08.197583 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 13:30:08.206663 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:30:08.216635 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:30:08.225224 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 13:30:08.234782 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:30:08.242664 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:30:08.242795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:30:08.254269 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:30:08.259344 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:30:08.268439 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:30:08.268521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:30:08.278062 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:30:08.278173 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 2 13:30:08.291724 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 13:30:08.291838 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:30:08.297331 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 13:30:08.297426 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 13:30:08.361314 ignition[1145]: INFO : Ignition 2.19.0 Mar 2 13:30:08.361314 ignition[1145]: INFO : Stage: umount Mar 2 13:30:08.361314 ignition[1145]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:30:08.361314 ignition[1145]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 2 13:30:08.305682 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 2 13:30:08.385574 ignition[1145]: INFO : umount: umount passed Mar 2 13:30:08.385574 ignition[1145]: INFO : Ignition finished successfully Mar 2 13:30:08.305784 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 2 13:30:08.331716 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 2 13:30:08.343517 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:30:08.348236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:30:08.379699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 13:30:08.390481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 13:30:08.390674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:30:08.408149 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 13:30:08.408272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:30:08.416442 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 13:30:08.417509 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 2 13:30:08.417636 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 13:30:08.432071 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 13:30:08.432176 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 13:30:08.440394 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 13:30:08.440446 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 13:30:08.450785 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 13:30:08.450842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 13:30:08.455800 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 2 13:30:08.455847 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 2 13:30:08.466580 systemd[1]: Stopped target network.target - Network. Mar 2 13:30:08.474705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:30:08.474763 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:30:08.480774 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:30:08.489430 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:30:08.502292 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:30:08.507680 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:30:08.516822 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:30:08.527105 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:30:08.527152 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:30:08.536362 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:30:08.536401 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:30:08.546221 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 2 13:30:08.546272 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 13:30:08.554482 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:30:08.554524 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:30:08.564953 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:30:08.573079 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:30:08.586236 systemd-networkd[905]: eth0: DHCPv6 lease lost Mar 2 13:30:08.588032 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:30:08.588216 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:30:08.597529 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:30:08.768971 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: Data path switched from VF: enP52931s1 Mar 2 13:30:08.597574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:30:08.625666 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:30:08.636280 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:30:08.636348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 2 13:30:08.645446 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:30:08.661103 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:30:08.661261 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:30:08.673933 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:30:08.674151 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:30:08.705973 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:30:08.706091 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 2 13:30:08.715775 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 13:30:08.715825 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:30:08.725416 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:30:08.725478 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:30:08.742360 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:30:08.742431 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:30:08.758161 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:30:08.758238 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:30:08.781810 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:30:08.796184 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:30:08.796267 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:30:08.806615 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:30:08.806668 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:30:08.817582 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 13:30:08.817631 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:30:08.827939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 13:30:08.828006 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:30:08.842054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:30:08.842114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:30:08.852452 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 13:30:08.852686 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Mar 2 13:30:08.861087 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:30:08.861175 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:30:08.874021 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:30:08.874203 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:30:08.882938 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 13:30:08.892262 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 13:30:08.892334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:30:08.915742 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:30:08.937927 systemd[1]: Switching root. Mar 2 13:30:09.118672 systemd-journald[217]: Journal stopped Mar 2 13:30:13.955547 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Mar 2 13:30:13.955582 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 13:30:13.955593 kernel: SELinux: policy capability open_perms=1 Mar 2 13:30:13.955605 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 13:30:13.955613 kernel: SELinux: policy capability always_check_network=0 Mar 2 13:30:13.955621 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 13:30:13.955630 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 13:30:13.955638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 13:30:13.955646 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 13:30:13.955655 systemd[1]: Successfully loaded SELinux policy in 161.757ms. Mar 2 13:30:13.955666 kernel: audit: type=1403 audit(1772458210.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 13:30:13.955675 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.192ms. 
Mar 2 13:30:13.955685 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:30:13.955694 systemd[1]: Detected virtualization microsoft. Mar 2 13:30:13.955704 systemd[1]: Detected architecture arm64. Mar 2 13:30:13.955714 systemd[1]: Detected first boot. Mar 2 13:30:13.955724 systemd[1]: Hostname set to . Mar 2 13:30:13.955733 systemd[1]: Initializing machine ID from random generator. Mar 2 13:30:13.955742 zram_generator::config[1186]: No configuration found. Mar 2 13:30:13.955752 systemd[1]: Populated /etc with preset unit settings. Mar 2 13:30:13.955761 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 13:30:13.955772 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 2 13:30:13.955781 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 13:30:13.955793 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 13:30:13.955803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 13:30:13.955813 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 13:30:13.955822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 13:30:13.955832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 13:30:13.955842 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 13:30:13.955852 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 13:30:13.955861 systemd[1]: Created slice user.slice - User and Session Slice. 
Mar 2 13:30:13.955871 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:30:13.955880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:30:13.955889 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 2 13:30:13.955898 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 13:30:13.955908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 13:30:13.955917 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 2 13:30:13.955928 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 2 13:30:13.955937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:30:13.955947 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 2 13:30:13.955958 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 13:30:13.955968 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 13:30:13.955978 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 13:30:13.955987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:30:13.955999 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:30:13.956008 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:30:13.956018 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:30:13.956027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 13:30:13.956037 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 13:30:13.956046 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 2 13:30:13.956055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:30:13.956067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:30:13.956077 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 13:30:13.956087 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 13:30:13.956096 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 13:30:13.956106 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 13:30:13.956115 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 13:30:13.956126 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 13:30:13.956136 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 13:30:13.956146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 13:30:13.956156 systemd[1]: Reached target machines.target - Containers. Mar 2 13:30:13.956165 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 13:30:13.956175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:30:13.956185 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:30:13.956195 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 13:30:13.956207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:30:13.956217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:30:13.956227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:30:13.956237 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 2 13:30:13.956247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:30:13.956257 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 13:30:13.956267 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 13:30:13.956276 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 13:30:13.956286 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 13:30:13.956297 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 13:30:13.956307 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:30:13.956316 kernel: loop: module loaded Mar 2 13:30:13.956325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:30:13.956334 kernel: fuse: init (API version 7.39) Mar 2 13:30:13.956343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 13:30:13.956353 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 13:30:13.956387 systemd-journald[1268]: Collecting audit messages is disabled. Mar 2 13:30:13.956410 systemd-journald[1268]: Journal started Mar 2 13:30:13.956430 systemd-journald[1268]: Runtime Journal (/run/log/journal/d970bbd0a2fa40f3a9969f220244f1dc) is 8.0M, max 78.5M, 70.5M free. Mar 2 13:30:13.967542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:30:13.967605 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 13:30:13.099896 systemd[1]: Queued start job for default target multi-user.target. Mar 2 13:30:13.228808 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 2 13:30:13.229217 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 2 13:30:13.229589 systemd[1]: systemd-journald.service: Consumed 2.542s CPU time. 
Mar 2 13:30:13.974438 kernel: ACPI: bus type drm_connector registered Mar 2 13:30:13.974507 systemd[1]: Stopped verity-setup.service. Mar 2 13:30:13.994662 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:30:13.998808 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 13:30:14.003828 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 13:30:14.009557 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 13:30:14.014031 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 2 13:30:14.018880 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 13:30:14.025305 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 13:30:14.030925 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 13:30:14.036770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:30:14.042892 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 13:30:14.043134 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 13:30:14.048882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:30:14.049098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 13:30:14.054309 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 13:30:14.054573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 13:30:14.060002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:30:14.060224 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:30:14.065977 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 13:30:14.066194 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 13:30:14.071411 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 2 13:30:14.071668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:30:14.077283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:30:14.082821 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:30:14.089063 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 13:30:14.095443 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:30:14.109251 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 13:30:14.118538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 13:30:14.126244 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 13:30:14.133539 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 13:30:14.133582 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:30:14.139713 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 2 13:30:14.146072 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 13:30:14.151880 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 13:30:14.156383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:30:14.159644 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 13:30:14.165347 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 13:30:14.170781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 2 13:30:14.172678 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 13:30:14.179210 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 13:30:14.182724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:30:14.198666 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 13:30:14.208658 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 13:30:14.217643 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 2 13:30:14.229449 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 13:30:14.234889 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 13:30:14.241218 systemd-journald[1268]: Time spent on flushing to /var/log/journal/d970bbd0a2fa40f3a9969f220244f1dc is 18.813ms for 895 entries. Mar 2 13:30:14.241218 systemd-journald[1268]: System Journal (/var/log/journal/d970bbd0a2fa40f3a9969f220244f1dc) is 8.0M, max 2.6G, 2.6G free. Mar 2 13:30:14.302802 systemd-journald[1268]: Received client request to flush runtime journal. Mar 2 13:30:14.302871 kernel: loop0: detected capacity change from 0 to 114432 Mar 2 13:30:14.240819 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 13:30:14.252276 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 13:30:14.263635 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 13:30:14.275172 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 2 13:30:14.280866 udevadm[1323]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Mar 2 13:30:14.294873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:30:14.308344 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 13:30:14.329617 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 13:30:14.332562 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 2 13:30:14.374559 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 13:30:14.389690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:30:14.474180 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Mar 2 13:30:14.474531 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Mar 2 13:30:14.479412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:30:14.648495 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 13:30:14.689865 kernel: loop1: detected capacity change from 0 to 31320 Mar 2 13:30:15.052593 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 13:30:15.063627 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:30:15.093502 kernel: loop2: detected capacity change from 0 to 114328 Mar 2 13:30:15.100356 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Mar 2 13:30:15.259134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:30:15.274725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:30:15.343764 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 13:30:15.351700 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 2 13:30:15.398772 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 2 13:30:15.441565 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 13:30:15.454492 kernel: loop3: detected capacity change from 0 to 209336 Mar 2 13:30:15.476490 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 2 13:30:15.502547 kernel: loop4: detected capacity change from 0 to 114432 Mar 2 13:30:15.516409 systemd-networkd[1352]: lo: Link UP Mar 2 13:30:15.516423 systemd-networkd[1352]: lo: Gained carrier Mar 2 13:30:15.523598 kernel: hv_vmbus: registering driver hv_balloon Mar 2 13:30:15.523689 kernel: loop5: detected capacity change from 0 to 31320 Mar 2 13:30:15.523713 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Mar 2 13:30:15.519574 systemd-networkd[1352]: Enumeration completed Mar 2 13:30:15.519676 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:30:15.520070 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:30:15.520074 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:30:15.537705 kernel: hv_balloon: Memory hot add disabled on ARM64 Mar 2 13:30:15.537809 kernel: loop6: detected capacity change from 0 to 114328 Mar 2 13:30:15.547016 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 2 13:30:15.572243 kernel: hv_vmbus: registering driver hyperv_fb Mar 2 13:30:15.572498 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Mar 2 13:30:15.581789 kernel: loop7: detected capacity change from 0 to 209336 Mar 2 13:30:15.581882 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Mar 2 13:30:15.589195 kernel: Console: switching to colour dummy device 80x25 Mar 2 13:30:15.597354 kernel: Console: switching to colour frame buffer device 128x48 Mar 2 13:30:15.601336 (sd-merge)[1387]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Mar 2 13:30:15.602649 (sd-merge)[1387]: Merged extensions into '/usr'. Mar 2 13:30:15.629685 systemd[1]: Reloading requested from client PID 1321 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 13:30:15.629707 systemd[1]: Reloading... Mar 2 13:30:15.645563 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1356) Mar 2 13:30:15.652653 kernel: mlx5_core cec3:00:02.0 enP52931s1: Link up Mar 2 13:30:15.684495 kernel: hv_netvsc 00224878-9f4f-0022-4878-9f4f00224878 eth0: Data path switched to VF: enP52931s1 Mar 2 13:30:15.685127 systemd-networkd[1352]: enP52931s1: Link UP Mar 2 13:30:15.685258 systemd-networkd[1352]: eth0: Link UP Mar 2 13:30:15.685265 systemd-networkd[1352]: eth0: Gained carrier Mar 2 13:30:15.685280 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:30:15.699747 systemd-networkd[1352]: enP52931s1: Gained carrier Mar 2 13:30:15.705564 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:30:15.753503 zram_generator::config[1459]: No configuration found. 
Mar 2 13:30:15.868104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:30:15.945610 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 2 13:30:15.951489 systemd[1]: Reloading finished in 321 ms. Mar 2 13:30:15.994734 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 13:30:16.027792 systemd[1]: Starting ensure-sysext.service... Mar 2 13:30:16.033792 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 13:30:16.049860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:30:16.057775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:30:16.070121 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 13:30:16.070809 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 13:30:16.072068 systemd[1]: Reloading requested from client PID 1513 ('systemctl') (unit ensure-sysext.service)... Mar 2 13:30:16.072086 systemd[1]: Reloading... Mar 2 13:30:16.072887 systemd-tmpfiles[1515]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 13:30:16.073229 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Mar 2 13:30:16.073354 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Mar 2 13:30:16.108769 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:30:16.108780 systemd-tmpfiles[1515]: Skipping /boot Mar 2 13:30:16.121654 systemd-tmpfiles[1515]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 2 13:30:16.121821 systemd-tmpfiles[1515]: Skipping /boot Mar 2 13:30:16.164494 zram_generator::config[1552]: No configuration found. Mar 2 13:30:16.280700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:30:16.357934 systemd[1]: Reloading finished in 285 ms. Mar 2 13:30:16.374380 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 13:30:16.390917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 13:30:16.396779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:30:16.413686 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:30:16.441796 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 13:30:16.448422 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 13:30:16.455191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 13:30:16.465708 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:30:16.480801 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 13:30:16.489306 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:30:16.492839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:30:16.499935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:30:16.507894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 2 13:30:16.518206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:30:16.518999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:30:16.519165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:30:16.528072 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:30:16.528748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:30:16.542148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:30:16.548524 lvm[1615]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:30:16.551785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:30:16.560271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:30:16.567205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:30:16.567999 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:30:16.577233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:30:16.577386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:30:16.587998 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:30:16.601168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:30:16.604766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:30:16.615876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:30:16.622659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:30:16.622864 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:30:16.628223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:30:16.634457 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:30:16.642139 augenrules[1647]: No rules
Mar 2 13:30:16.642727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:30:16.642873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:30:16.648441 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:30:16.654050 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:30:16.654193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:30:16.659387 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:30:16.659538 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:30:16.665102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:30:16.666501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:30:16.678625 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:30:16.680418 systemd-resolved[1622]: Positive Trust Anchors:
Mar 2 13:30:16.680722 systemd-resolved[1622]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:30:16.680757 systemd-resolved[1622]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:30:16.686694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:30:16.697630 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:30:16.702076 lvm[1660]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:30:16.702673 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:30:16.702745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:30:16.721361 systemd-resolved[1622]: Using system hostname 'ci-4081.3.101-160832fd4e'.
Mar 2 13:30:16.723174 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:30:16.728321 systemd[1]: Reached target network.target - Network.
Mar 2 13:30:16.731990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:30:16.739262 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:30:17.053595 systemd-networkd[1352]: eth0: Gained IPv6LL
Mar 2 13:30:17.059358 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:30:17.065459 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:30:17.094135 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:30:17.100231 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:30:19.768013 ldconfig[1315]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:30:19.792568 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:30:19.802747 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:30:19.816839 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:30:19.822534 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:30:19.827953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:30:19.833812 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:30:19.839666 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:30:19.844394 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:30:19.849902 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:30:19.855309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:30:19.855346 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:30:19.859278 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:30:19.865528 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:30:19.872082 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:30:19.881686 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:30:19.886966 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:30:19.891771 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:30:19.895750 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:30:19.899772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:30:19.899801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:30:19.910594 systemd[1]: Starting chronyd.service - NTP client/server...
Mar 2 13:30:19.917649 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:30:19.935369 (chronyd)[1669]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Mar 2 13:30:19.938677 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 2 13:30:19.945604 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:30:19.951678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:30:19.960839 chronyd[1677]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Mar 2 13:30:19.963663 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:30:19.968012 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:30:19.968059 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Mar 2 13:30:19.969701 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Mar 2 13:30:19.976806 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Mar 2 13:30:19.977666 jq[1675]: false
Mar 2 13:30:19.979842 chronyd[1677]: Timezone right/UTC failed leap second check, ignoring
Mar 2 13:30:19.980043 chronyd[1677]: Loaded seccomp filter (level 2)
Mar 2 13:30:19.985636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:19.988315 KVP[1679]: KVP starting; pid is:1679
Mar 2 13:30:19.995022 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:30:20.008131 kernel: hv_utils: KVP IC version 4.0
Mar 2 13:30:20.004363 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:30:20.008318 KVP[1679]: KVP LIC Version: 3.1
Mar 2 13:30:20.018622 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:30:20.023844 extend-filesystems[1678]: Found loop4
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found loop5
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found loop6
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found loop7
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda1
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda2
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda3
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found usr
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda4
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda6
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda7
Mar 2 13:30:20.032967 extend-filesystems[1678]: Found sda9
Mar 2 13:30:20.032967 extend-filesystems[1678]: Checking size of /dev/sda9
Mar 2 13:30:20.235893 extend-filesystems[1678]: Old size kept for /dev/sda9
Mar 2 13:30:20.235893 extend-filesystems[1678]: Found sr0
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.159 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.166 INFO Fetch successful
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.166 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.173 INFO Fetch successful
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.176 INFO Fetching http://168.63.129.16/machine/73329df8-8987-443a-913a-f700887da06f/c910f544%2D925d%2D4851%2D83ea%2D3abab14f758a.%5Fci%2D4081.3.101%2D160832fd4e?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.184 INFO Fetch successful
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.184 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Mar 2 13:30:20.242472 coreos-metadata[1671]: Mar 02 13:30:20.200 INFO Fetch successful
Mar 2 13:30:20.036738 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:30:20.063937 dbus-daemon[1672]: [system] SELinux support is enabled
Mar 2 13:30:20.046526 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:30:20.069823 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:30:20.082838 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:30:20.083305 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:30:20.085684 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:30:20.256188 jq[1702]: true
Mar 2 13:30:20.102645 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:30:20.256423 update_engine[1696]: I20260302 13:30:20.155085 1696 main.cc:92] Flatcar Update Engine starting
Mar 2 13:30:20.256423 update_engine[1696]: I20260302 13:30:20.161950 1696 update_check_scheduler.cc:74] Next update check in 8m13s
Mar 2 13:30:20.109793 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:30:20.120310 systemd[1]: Started chronyd.service - NTP client/server.
Mar 2 13:30:20.133025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:30:20.133211 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:30:20.133482 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:30:20.133977 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:30:20.166936 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:30:20.167114 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:30:20.174903 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:30:20.199954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:30:20.200166 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:30:20.218344 systemd-logind[1692]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar 2 13:30:20.219644 systemd-logind[1692]: New seat seat0.
Mar 2 13:30:20.249900 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:30:20.292495 jq[1726]: true
Mar 2 13:30:20.293276 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:30:20.299327 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 2 13:30:20.304160 dbus-daemon[1672]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 2 13:30:20.308812 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:30:20.320519 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:30:20.320731 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:30:20.321082 tar[1715]: linux-arm64/LICENSE
Mar 2 13:30:20.321082 tar[1715]: linux-arm64/helm
Mar 2 13:30:20.320871 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:30:20.330254 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:30:20.330367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:30:20.343495 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:30:20.392748 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1719)
Mar 2 13:30:20.554550 bash[1778]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:30:20.555552 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:30:20.566958 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:30:20.636596 locksmithd[1752]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:30:20.983305 tar[1715]: linux-arm64/README.md
Mar 2 13:30:20.996520 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:30:21.154664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:21.157487 containerd[1729]: time="2026-03-02T13:30:21.155638180Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:30:21.161218 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:21.217608 containerd[1729]: time="2026-03-02T13:30:21.217550500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.222160 containerd[1729]: time="2026-03-02T13:30:21.222097700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:30:21.222404 containerd[1729]: time="2026-03-02T13:30:21.222383420Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:30:21.222514 containerd[1729]: time="2026-03-02T13:30:21.222464700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:30:21.223924 containerd[1729]: time="2026-03-02T13:30:21.223182580Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:30:21.223924 containerd[1729]: time="2026-03-02T13:30:21.223707460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.223924 containerd[1729]: time="2026-03-02T13:30:21.223812900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:30:21.223924 containerd[1729]: time="2026-03-02T13:30:21.223829180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.225485 containerd[1729]: time="2026-03-02T13:30:21.224627380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:30:21.225485 containerd[1729]: time="2026-03-02T13:30:21.224654300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.225485 containerd[1729]: time="2026-03-02T13:30:21.224682540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:30:21.225485 containerd[1729]: time="2026-03-02T13:30:21.224695460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.225485 containerd[1729]: time="2026-03-02T13:30:21.224791860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.225485 containerd[1729]: time="2026-03-02T13:30:21.225015180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:30:21.226189 containerd[1729]: time="2026-03-02T13:30:21.226162180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:30:21.226279 containerd[1729]: time="2026-03-02T13:30:21.226264940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:30:21.226449 containerd[1729]: time="2026-03-02T13:30:21.226429820Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:30:21.227364 containerd[1729]: time="2026-03-02T13:30:21.227169260Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:30:21.244053 containerd[1729]: time="2026-03-02T13:30:21.243818180Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:30:21.244053 containerd[1729]: time="2026-03-02T13:30:21.243890180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:30:21.244053 containerd[1729]: time="2026-03-02T13:30:21.243914300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:30:21.244053 containerd[1729]: time="2026-03-02T13:30:21.243931380Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:30:21.244499 containerd[1729]: time="2026-03-02T13:30:21.243949020Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:30:21.244499 containerd[1729]: time="2026-03-02T13:30:21.244408540Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245451660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245631700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245651500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245665540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245679700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245694820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245707500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245722140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245737380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245750100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245763900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245776060Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245797660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246494 containerd[1729]: time="2026-03-02T13:30:21.245812660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245825540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245839220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245851500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245881380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245893700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245907380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245919660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245933820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245945620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245958900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245971820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.245988980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.246010220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.246022380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.246795 containerd[1729]: time="2026-03-02T13:30:21.246037060Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247438060Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247496620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247510340Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247523860Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247534300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247551380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247562340Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:30:21.248205 containerd[1729]: time="2026-03-02T13:30:21.247573380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:30:21.248779 containerd[1729]: time="2026-03-02T13:30:21.248711580Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 13:30:21.249887 containerd[1729]: time="2026-03-02T13:30:21.248930380Z" level=info msg="Connect containerd service"
Mar 2 13:30:21.249887 containerd[1729]: time="2026-03-02T13:30:21.248989700Z" level=info msg="using legacy CRI server"
Mar 2 13:30:21.249887 containerd[1729]: time="2026-03-02T13:30:21.248998380Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:30:21.249887 containerd[1729]: time="2026-03-02T13:30:21.249101980Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 13:30:21.250928 containerd[1729]: time="2026-03-02T13:30:21.250868220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:30:21.251155 containerd[1729]: time="2026-03-02T13:30:21.251106340Z" level=info msg="Start subscribing containerd event"
Mar 2 13:30:21.251493 containerd[1729]: time="2026-03-02T13:30:21.251238780Z" level=info msg="Start recovering state"
Mar 2 13:30:21.251493 containerd[1729]: time="2026-03-02T13:30:21.251313220Z" level=info msg="Start event monitor"
Mar 2 13:30:21.251493 containerd[1729]: time="2026-03-02T13:30:21.251323980Z" level=info msg="Start snapshots syncer"
Mar 2 13:30:21.251493 containerd[1729]: time="2026-03-02T13:30:21.251340780Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:30:21.251493 containerd[1729]: time="2026-03-02T13:30:21.251348940Z" level=info msg="Start streaming server"
Mar 2 13:30:21.252560 containerd[1729]: time="2026-03-02T13:30:21.252497300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:30:21.252560 containerd[1729]: time="2026-03-02T13:30:21.252561820Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:30:21.252706 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:30:21.262586 containerd[1729]: time="2026-03-02T13:30:21.262541420Z" level=info msg="containerd successfully booted in 0.112074s" Mar 2 13:30:21.598755 kubelet[1805]: E0302 13:30:21.598646 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:30:21.601802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:30:21.601958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:30:21.936189 sshd_keygen[1708]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 13:30:21.956083 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 13:30:21.966877 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 13:30:21.977633 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 2 13:30:21.984839 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 13:30:21.985037 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 13:30:22.001173 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 13:30:22.011669 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 2 13:30:22.021515 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 13:30:22.032847 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 13:30:22.044788 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 2 13:30:22.050225 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 13:30:22.054496 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:30:22.062542 systemd[1]: Startup finished in 635ms (kernel) + 12.569s (initrd) + 11.738s (userspace) = 24.944s. 
Mar 2 13:30:22.349446 login[1836]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 2 13:30:22.351694 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:22.363227 systemd-logind[1692]: New session 1 of user core. Mar 2 13:30:22.364618 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:30:22.370698 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:30:22.399649 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 13:30:22.406036 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 13:30:22.436654 (systemd)[1844]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:30:22.559200 systemd[1844]: Queued start job for default target default.target. Mar 2 13:30:22.567915 systemd[1844]: Created slice app.slice - User Application Slice. Mar 2 13:30:22.567947 systemd[1844]: Reached target paths.target - Paths. Mar 2 13:30:22.567960 systemd[1844]: Reached target timers.target - Timers. Mar 2 13:30:22.569293 systemd[1844]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 2 13:30:22.580503 systemd[1844]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:30:22.580621 systemd[1844]: Reached target sockets.target - Sockets. Mar 2 13:30:22.580634 systemd[1844]: Reached target basic.target - Basic System. Mar 2 13:30:22.580674 systemd[1844]: Reached target default.target - Main User Target. Mar 2 13:30:22.580701 systemd[1844]: Startup finished in 136ms. Mar 2 13:30:22.580772 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:30:22.582063 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:30:23.349777 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:23.354044 systemd-logind[1692]: New session 2 of user core. 
Mar 2 13:30:23.360621 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 13:30:23.749120 waagent[1833]: 2026-03-02T13:30:23.744410Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 2 13:30:23.749760 waagent[1833]: 2026-03-02T13:30:23.749675Z INFO Daemon Daemon OS: flatcar 4081.3.101 Mar 2 13:30:23.753578 waagent[1833]: 2026-03-02T13:30:23.753505Z INFO Daemon Daemon Python: 3.11.9 Mar 2 13:30:23.757500 waagent[1833]: 2026-03-02T13:30:23.757394Z INFO Daemon Daemon Run daemon Mar 2 13:30:23.762994 waagent[1833]: 2026-03-02T13:30:23.762935Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.101' Mar 2 13:30:23.770473 waagent[1833]: 2026-03-02T13:30:23.770395Z INFO Daemon Daemon Using waagent for provisioning Mar 2 13:30:23.775228 waagent[1833]: 2026-03-02T13:30:23.775166Z INFO Daemon Daemon Activate resource disk Mar 2 13:30:23.778932 waagent[1833]: 2026-03-02T13:30:23.778877Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 2 13:30:23.788653 waagent[1833]: 2026-03-02T13:30:23.788587Z INFO Daemon Daemon Found device: None Mar 2 13:30:23.792702 waagent[1833]: 2026-03-02T13:30:23.792646Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 2 13:30:23.799590 waagent[1833]: 2026-03-02T13:30:23.799539Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 2 13:30:23.810837 waagent[1833]: 2026-03-02T13:30:23.810776Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 2 13:30:23.815750 waagent[1833]: 2026-03-02T13:30:23.815696Z INFO Daemon Daemon Running default provisioning handler Mar 2 13:30:23.826808 waagent[1833]: 2026-03-02T13:30:23.826722Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' 
returned non-zero exit status 4. Mar 2 13:30:23.838349 waagent[1833]: 2026-03-02T13:30:23.838277Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 2 13:30:23.846064 waagent[1833]: 2026-03-02T13:30:23.846007Z INFO Daemon Daemon cloud-init is enabled: False Mar 2 13:30:23.850111 waagent[1833]: 2026-03-02T13:30:23.850060Z INFO Daemon Daemon Copying ovf-env.xml Mar 2 13:30:23.966115 waagent[1833]: 2026-03-02T13:30:23.964543Z INFO Daemon Daemon Successfully mounted dvd Mar 2 13:30:24.001132 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 2 13:30:24.004382 waagent[1833]: 2026-03-02T13:30:24.004296Z INFO Daemon Daemon Detect protocol endpoint Mar 2 13:30:24.008230 waagent[1833]: 2026-03-02T13:30:24.008176Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 2 13:30:24.012800 waagent[1833]: 2026-03-02T13:30:24.012755Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Mar 2 13:30:24.017963 waagent[1833]: 2026-03-02T13:30:24.017922Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 2 13:30:24.022330 waagent[1833]: 2026-03-02T13:30:24.022271Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 2 13:30:24.026780 waagent[1833]: 2026-03-02T13:30:24.026727Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 2 13:30:24.060203 waagent[1833]: 2026-03-02T13:30:24.060140Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 2 13:30:24.066262 waagent[1833]: 2026-03-02T13:30:24.066233Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 2 13:30:24.070686 waagent[1833]: 2026-03-02T13:30:24.070640Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 2 13:30:24.225494 waagent[1833]: 2026-03-02T13:30:24.221076Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 2 13:30:24.230550 waagent[1833]: 2026-03-02T13:30:24.226225Z INFO Daemon Daemon Forcing an update of the goal state. 
Mar 2 13:30:24.234450 waagent[1833]: 2026-03-02T13:30:24.234379Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 2 13:30:24.254218 waagent[1833]: 2026-03-02T13:30:24.254134Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 2 13:30:24.259087 waagent[1833]: 2026-03-02T13:30:24.259038Z INFO Daemon Mar 2 13:30:24.261422 waagent[1833]: 2026-03-02T13:30:24.261380Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e50cdc3d-ac2f-4661-b5e8-c8ddb615a320 eTag: 9498394073503225568 source: Fabric] Mar 2 13:30:24.270641 waagent[1833]: 2026-03-02T13:30:24.270588Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Mar 2 13:30:24.276727 waagent[1833]: 2026-03-02T13:30:24.276669Z INFO Daemon Mar 2 13:30:24.279258 waagent[1833]: 2026-03-02T13:30:24.279205Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 2 13:30:24.289844 waagent[1833]: 2026-03-02T13:30:24.289802Z INFO Daemon Daemon Downloading artifacts profile blob Mar 2 13:30:24.457087 waagent[1833]: 2026-03-02T13:30:24.456992Z INFO Daemon Downloaded certificate {'thumbprint': '202E448B6B0C3A8AF18D3EFF25F6303A7E5C42AE', 'hasPrivateKey': True} Mar 2 13:30:24.465115 waagent[1833]: 2026-03-02T13:30:24.465062Z INFO Daemon Fetch goal state completed Mar 2 13:30:24.475627 waagent[1833]: 2026-03-02T13:30:24.475582Z INFO Daemon Daemon Starting provisioning Mar 2 13:30:24.479674 waagent[1833]: 2026-03-02T13:30:24.479621Z INFO Daemon Daemon Handle ovf-env.xml. 
Mar 2 13:30:24.483220 waagent[1833]: 2026-03-02T13:30:24.483181Z INFO Daemon Daemon Set hostname [ci-4081.3.101-160832fd4e] Mar 2 13:30:24.490265 waagent[1833]: 2026-03-02T13:30:24.490194Z INFO Daemon Daemon Publish hostname [ci-4081.3.101-160832fd4e] Mar 2 13:30:24.495808 waagent[1833]: 2026-03-02T13:30:24.495742Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 2 13:30:24.501255 waagent[1833]: 2026-03-02T13:30:24.501198Z INFO Daemon Daemon Primary interface is [eth0] Mar 2 13:30:24.530155 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:30:24.530162 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:30:24.530210 systemd-networkd[1352]: eth0: DHCP lease lost Mar 2 13:30:24.531968 waagent[1833]: 2026-03-02T13:30:24.531864Z INFO Daemon Daemon Create user account if not exists Mar 2 13:30:24.536914 waagent[1833]: 2026-03-02T13:30:24.536850Z INFO Daemon Daemon User core already exists, skip useradd Mar 2 13:30:24.541599 waagent[1833]: 2026-03-02T13:30:24.541535Z INFO Daemon Daemon Configure sudoer Mar 2 13:30:24.543530 systemd-networkd[1352]: eth0: DHCPv6 lease lost Mar 2 13:30:24.545485 waagent[1833]: 2026-03-02T13:30:24.545407Z INFO Daemon Daemon Configure sshd Mar 2 13:30:24.549389 waagent[1833]: 2026-03-02T13:30:24.549325Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 2 13:30:24.560068 waagent[1833]: 2026-03-02T13:30:24.560016Z INFO Daemon Daemon Deploy ssh public key. 
Mar 2 13:30:24.573583 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 2 13:30:25.675874 waagent[1833]: 2026-03-02T13:30:25.675820Z INFO Daemon Daemon Provisioning complete Mar 2 13:30:25.693409 waagent[1833]: 2026-03-02T13:30:25.693349Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 2 13:30:25.698536 waagent[1833]: 2026-03-02T13:30:25.698458Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Mar 2 13:30:25.706508 waagent[1833]: 2026-03-02T13:30:25.706450Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 2 13:30:25.852075 waagent[1894]: 2026-03-02T13:30:25.851346Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 2 13:30:25.852075 waagent[1894]: 2026-03-02T13:30:25.851539Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.101 Mar 2 13:30:25.852075 waagent[1894]: 2026-03-02T13:30:25.851600Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 2 13:30:26.120512 waagent[1894]: 2026-03-02T13:30:26.120301Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.101; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 2 13:30:26.120817 waagent[1894]: 2026-03-02T13:30:26.120726Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:30:26.120968 waagent[1894]: 2026-03-02T13:30:26.120911Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:30:26.129148 waagent[1894]: 2026-03-02T13:30:26.129083Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 2 13:30:26.134861 waagent[1894]: 2026-03-02T13:30:26.134822Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 2 13:30:26.135355 waagent[1894]: 2026-03-02T13:30:26.135316Z INFO ExtHandler Mar 2 13:30:26.135426 waagent[1894]: 2026-03-02T13:30:26.135399Z 
INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9c6469c5-c248-4c20-9c2b-46724d219d15 eTag: 9498394073503225568 source: Fabric] Mar 2 13:30:26.135733 waagent[1894]: 2026-03-02T13:30:26.135694Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Mar 2 13:30:26.136316 waagent[1894]: 2026-03-02T13:30:26.136272Z INFO ExtHandler Mar 2 13:30:26.136375 waagent[1894]: 2026-03-02T13:30:26.136350Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 2 13:30:26.140064 waagent[1894]: 2026-03-02T13:30:26.140034Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 2 13:30:26.211783 waagent[1894]: 2026-03-02T13:30:26.211685Z INFO ExtHandler Downloaded certificate {'thumbprint': '202E448B6B0C3A8AF18D3EFF25F6303A7E5C42AE', 'hasPrivateKey': True} Mar 2 13:30:26.212343 waagent[1894]: 2026-03-02T13:30:26.212297Z INFO ExtHandler Fetch goal state completed Mar 2 13:30:26.227571 waagent[1894]: 2026-03-02T13:30:26.227509Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1894 Mar 2 13:30:26.227740 waagent[1894]: 2026-03-02T13:30:26.227706Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 2 13:30:26.229457 waagent[1894]: 2026-03-02T13:30:26.229413Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.101', '', 'Flatcar Container Linux by Kinvolk'] Mar 2 13:30:26.229849 waagent[1894]: 2026-03-02T13:30:26.229802Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 2 13:30:26.252089 waagent[1894]: 2026-03-02T13:30:26.252045Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 2 13:30:26.256204 waagent[1894]: 2026-03-02T13:30:26.256143Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 2 13:30:26.263061 waagent[1894]: 
2026-03-02T13:30:26.263017Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 2 13:30:26.270114 systemd[1]: Reloading requested from client PID 1907 ('systemctl') (unit waagent.service)... Mar 2 13:30:26.270369 systemd[1]: Reloading... Mar 2 13:30:26.359501 zram_generator::config[1955]: No configuration found. Mar 2 13:30:26.449700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:30:26.529119 systemd[1]: Reloading finished in 258 ms. Mar 2 13:30:26.554127 waagent[1894]: 2026-03-02T13:30:26.553985Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 2 13:30:26.560984 systemd[1]: Reloading requested from client PID 1997 ('systemctl') (unit waagent.service)... Mar 2 13:30:26.561000 systemd[1]: Reloading... Mar 2 13:30:26.648508 zram_generator::config[2031]: No configuration found. Mar 2 13:30:26.757603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:30:26.832980 systemd[1]: Reloading finished in 271 ms. Mar 2 13:30:26.860241 waagent[1894]: 2026-03-02T13:30:26.859463Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 2 13:30:26.860241 waagent[1894]: 2026-03-02T13:30:26.859633Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 2 13:30:27.267064 waagent[1894]: 2026-03-02T13:30:27.266981Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Mar 2 13:30:27.267666 waagent[1894]: 2026-03-02T13:30:27.267614Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 2 13:30:27.268514 waagent[1894]: 2026-03-02T13:30:27.268412Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 2 13:30:27.268883 waagent[1894]: 2026-03-02T13:30:27.268795Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 2 13:30:27.269500 waagent[1894]: 2026-03-02T13:30:27.269092Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:30:27.269500 waagent[1894]: 2026-03-02T13:30:27.269190Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:30:27.269500 waagent[1894]: 2026-03-02T13:30:27.269388Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 2 13:30:27.269619 waagent[1894]: 2026-03-02T13:30:27.269584Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 2 13:30:27.269619 waagent[1894]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 2 13:30:27.269619 waagent[1894]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 2 13:30:27.269619 waagent[1894]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 2 13:30:27.269619 waagent[1894]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:30:27.269619 waagent[1894]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:30:27.269619 waagent[1894]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 2 13:30:27.270028 waagent[1894]: 2026-03-02T13:30:27.269978Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 2 13:30:27.270369 waagent[1894]: 2026-03-02T13:30:27.270068Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 2 13:30:27.270369 waagent[1894]: 2026-03-02T13:30:27.270187Z INFO EnvHandler 
ExtHandler Wire server endpoint:168.63.129.16 Mar 2 13:30:27.270557 waagent[1894]: 2026-03-02T13:30:27.270511Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 2 13:30:27.270615 waagent[1894]: 2026-03-02T13:30:27.270337Z INFO EnvHandler ExtHandler Configure routes Mar 2 13:30:27.271051 waagent[1894]: 2026-03-02T13:30:27.270998Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 2 13:30:27.271198 waagent[1894]: 2026-03-02T13:30:27.271152Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 2 13:30:27.271289 waagent[1894]: 2026-03-02T13:30:27.271254Z INFO EnvHandler ExtHandler Gateway:None Mar 2 13:30:27.271375 waagent[1894]: 2026-03-02T13:30:27.271339Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 2 13:30:27.271753 waagent[1894]: 2026-03-02T13:30:27.271708Z INFO EnvHandler ExtHandler Routes:None Mar 2 13:30:27.278052 waagent[1894]: 2026-03-02T13:30:27.278002Z INFO ExtHandler ExtHandler Mar 2 13:30:27.278435 waagent[1894]: 2026-03-02T13:30:27.278383Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2a1b9628-f9b0-4c9e-bf9a-3077de9b4a8b correlation b58d1452-5dd4-4320-a347-40f23acfa7be created: 2026-03-02T13:29:18.683647Z] Mar 2 13:30:27.279642 waagent[1894]: 2026-03-02T13:30:27.279588Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 2 13:30:27.280302 waagent[1894]: 2026-03-02T13:30:27.280258Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Mar 2 13:30:27.317266 waagent[1894]: 2026-03-02T13:30:27.317193Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CB604CDD-D087-4B3B-A6F8-749AB797773F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 2 13:30:27.337563 waagent[1894]: 2026-03-02T13:30:27.337481Z INFO MonitorHandler ExtHandler Network interfaces: Mar 2 13:30:27.337563 waagent[1894]: Executing ['ip', '-a', '-o', 'link']: Mar 2 13:30:27.337563 waagent[1894]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 2 13:30:27.337563 waagent[1894]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:9f:4f brd ff:ff:ff:ff:ff:ff Mar 2 13:30:27.337563 waagent[1894]: 3: enP52931s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:9f:4f brd ff:ff:ff:ff:ff:ff\ altname enP52931p0s2 Mar 2 13:30:27.337563 waagent[1894]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 2 13:30:27.337563 waagent[1894]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 2 13:30:27.337563 waagent[1894]: 2: eth0 inet 10.200.20.22/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 2 13:30:27.337563 waagent[1894]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 2 13:30:27.337563 waagent[1894]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 2 13:30:27.337563 waagent[1894]: 2: eth0 inet6 fe80::222:48ff:fe78:9f4f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 2 13:30:27.371501 waagent[1894]: 2026-03-02T13:30:27.370660Z INFO EnvHandler ExtHandler Successfully added Azure 
fabric firewall rules. Current Firewall rules: Mar 2 13:30:27.371501 waagent[1894]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:30:27.371501 waagent[1894]: pkts bytes target prot opt in out source destination Mar 2 13:30:27.371501 waagent[1894]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:30:27.371501 waagent[1894]: pkts bytes target prot opt in out source destination Mar 2 13:30:27.371501 waagent[1894]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:30:27.371501 waagent[1894]: pkts bytes target prot opt in out source destination Mar 2 13:30:27.371501 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 2 13:30:27.371501 waagent[1894]: 8 998 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 2 13:30:27.371501 waagent[1894]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 2 13:30:27.375122 waagent[1894]: 2026-03-02T13:30:27.375051Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 2 13:30:27.375122 waagent[1894]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:30:27.375122 waagent[1894]: pkts bytes target prot opt in out source destination Mar 2 13:30:27.375122 waagent[1894]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:30:27.375122 waagent[1894]: pkts bytes target prot opt in out source destination Mar 2 13:30:27.375122 waagent[1894]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 2 13:30:27.375122 waagent[1894]: pkts bytes target prot opt in out source destination Mar 2 13:30:27.375122 waagent[1894]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 2 13:30:27.375122 waagent[1894]: 12 1413 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 2 13:30:27.375122 waagent[1894]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 2 13:30:27.375424 waagent[1894]: 2026-03-02T13:30:27.375360Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 2 13:30:31.825293 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 1. Mar 2 13:30:31.830685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:30:31.961501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:30:31.972783 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:30:32.035984 kubelet[2124]: E0302 13:30:32.035898 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:30:32.039798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:30:32.040081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:30:34.773733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:30:34.775744 systemd[1]: Started sshd@0-10.200.20.22:22-10.200.16.10:57532.service - OpenSSH per-connection server daemon (10.200.16.10:57532). Mar 2 13:30:35.322505 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 57532 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:30:35.323949 sshd[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:35.328706 systemd-logind[1692]: New session 3 of user core. Mar 2 13:30:35.334621 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:30:35.757355 systemd[1]: Started sshd@1-10.200.20.22:22-10.200.16.10:57536.service - OpenSSH per-connection server daemon (10.200.16.10:57536). 
Mar 2 13:30:36.244038 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 57536 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:30:36.245412 sshd[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:36.249386 systemd-logind[1692]: New session 4 of user core. Mar 2 13:30:36.255644 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:30:36.595617 sshd[2137]: pam_unix(sshd:session): session closed for user core Mar 2 13:30:36.598968 systemd[1]: sshd@1-10.200.20.22:22-10.200.16.10:57536.service: Deactivated successfully. Mar 2 13:30:36.601004 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:30:36.601645 systemd-logind[1692]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:30:36.602432 systemd-logind[1692]: Removed session 4. Mar 2 13:30:36.683241 systemd[1]: Started sshd@2-10.200.20.22:22-10.200.16.10:57538.service - OpenSSH per-connection server daemon (10.200.16.10:57538). Mar 2 13:30:37.170497 sshd[2144]: Accepted publickey for core from 10.200.16.10 port 57538 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:30:37.171373 sshd[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:37.176132 systemd-logind[1692]: New session 5 of user core. Mar 2 13:30:37.181629 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 13:30:37.517190 sshd[2144]: pam_unix(sshd:session): session closed for user core Mar 2 13:30:37.521518 systemd-logind[1692]: Session 5 logged out. Waiting for processes to exit. Mar 2 13:30:37.521994 systemd[1]: sshd@2-10.200.20.22:22-10.200.16.10:57538.service: Deactivated successfully. Mar 2 13:30:37.523864 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 13:30:37.526174 systemd-logind[1692]: Removed session 5. 
Mar 2 13:30:37.603644 systemd[1]: Started sshd@3-10.200.20.22:22-10.200.16.10:57550.service - OpenSSH per-connection server daemon (10.200.16.10:57550). Mar 2 13:30:38.092509 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 57550 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:30:38.093551 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:38.098180 systemd-logind[1692]: New session 6 of user core. Mar 2 13:30:38.108664 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:30:38.443342 sshd[2151]: pam_unix(sshd:session): session closed for user core Mar 2 13:30:38.447048 systemd[1]: sshd@3-10.200.20.22:22-10.200.16.10:57550.service: Deactivated successfully. Mar 2 13:30:38.448961 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:30:38.450448 systemd-logind[1692]: Session 6 logged out. Waiting for processes to exit. Mar 2 13:30:38.451934 systemd-logind[1692]: Removed session 6. Mar 2 13:30:38.536722 systemd[1]: Started sshd@4-10.200.20.22:22-10.200.16.10:57554.service - OpenSSH per-connection server daemon (10.200.16.10:57554). Mar 2 13:30:39.022566 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 57554 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:30:39.024494 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:39.029633 systemd-logind[1692]: New session 7 of user core. Mar 2 13:30:39.036646 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 2 13:30:39.431559 sudo[2161]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 13:30:39.431846 sudo[2161]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:30:39.447720 sudo[2161]: pam_unix(sudo:session): session closed for user root Mar 2 13:30:39.525243 sshd[2158]: pam_unix(sshd:session): session closed for user core Mar 2 13:30:39.530541 systemd[1]: sshd@4-10.200.20.22:22-10.200.16.10:57554.service: Deactivated successfully. Mar 2 13:30:39.532655 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 13:30:39.533857 systemd-logind[1692]: Session 7 logged out. Waiting for processes to exit. Mar 2 13:30:39.535105 systemd-logind[1692]: Removed session 7. Mar 2 13:30:39.620343 systemd[1]: Started sshd@5-10.200.20.22:22-10.200.16.10:57558.service - OpenSSH per-connection server daemon (10.200.16.10:57558). Mar 2 13:30:40.104876 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 57558 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:30:40.106300 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:30:40.111030 systemd-logind[1692]: New session 8 of user core. Mar 2 13:30:40.117668 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 13:30:40.380698 sudo[2170]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 13:30:40.381230 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:30:40.384816 sudo[2170]: pam_unix(sudo:session): session closed for user root Mar 2 13:30:40.390374 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 13:30:40.391189 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:30:40.404724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Mar 2 13:30:40.406584 auditctl[2173]: No rules
Mar 2 13:30:40.407090 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:30:40.407265 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 2 13:30:40.410897 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:30:40.441079 augenrules[2191]: No rules
Mar 2 13:30:40.444526 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:30:40.446285 sudo[2169]: pam_unix(sudo:session): session closed for user root
Mar 2 13:30:40.524296 sshd[2166]: pam_unix(sshd:session): session closed for user core
Mar 2 13:30:40.527759 systemd[1]: sshd@5-10.200.20.22:22-10.200.16.10:57558.service: Deactivated successfully.
Mar 2 13:30:40.529535 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:30:40.530792 systemd-logind[1692]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:30:40.531942 systemd-logind[1692]: Removed session 8.
Mar 2 13:30:40.615458 systemd[1]: Started sshd@6-10.200.20.22:22-10.200.16.10:45350.service - OpenSSH per-connection server daemon (10.200.16.10:45350).
Mar 2 13:30:41.106499 sshd[2199]: Accepted publickey for core from 10.200.16.10 port 45350 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8
Mar 2 13:30:41.107355 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:30:41.112127 systemd-logind[1692]: New session 9 of user core.
Mar 2 13:30:41.117684 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:30:41.381901 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:30:41.382175 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:30:42.075235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 2 13:30:42.081688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:42.190624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:42.203949 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:42.342669 kubelet[2220]: E0302 13:30:42.342536 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:42.345953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:42.346114 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:43.142753 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 2 13:30:43.142906 (dockerd)[2233]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 2 13:30:43.779680 chronyd[1677]: Selected source PHC0
Mar 2 13:30:43.876008 dockerd[2233]: time="2026-03-02T13:30:43.875942827Z" level=info msg="Starting up"
Mar 2 13:30:44.283171 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3349456058-merged.mount: Deactivated successfully.
Mar 2 13:30:44.361452 dockerd[2233]: time="2026-03-02T13:30:44.361071201Z" level=info msg="Loading containers: start."
Mar 2 13:30:44.535693 kernel: Initializing XFRM netlink socket
Mar 2 13:30:44.688114 systemd-networkd[1352]: docker0: Link UP
Mar 2 13:30:44.713904 dockerd[2233]: time="2026-03-02T13:30:44.713852479Z" level=info msg="Loading containers: done."
Mar 2 13:30:44.727286 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3903589673-merged.mount: Deactivated successfully.
Mar 2 13:30:44.740911 dockerd[2233]: time="2026-03-02T13:30:44.740650205Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 2 13:30:44.740911 dockerd[2233]: time="2026-03-02T13:30:44.740787810Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 2 13:30:44.741062 dockerd[2233]: time="2026-03-02T13:30:44.740933895Z" level=info msg="Daemon has completed initialization"
Mar 2 13:30:44.807419 dockerd[2233]: time="2026-03-02T13:30:44.806936309Z" level=info msg="API listen on /run/docker.sock"
Mar 2 13:30:44.807756 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 2 13:30:45.203310 containerd[1729]: time="2026-03-02T13:30:45.202890736Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 2 13:30:46.191100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount191372685.mount: Deactivated successfully.
Mar 2 13:30:48.201606 containerd[1729]: time="2026-03-02T13:30:48.200741664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:48.203221 containerd[1729]: time="2026-03-02T13:30:48.203175706Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 2 13:30:48.209870 containerd[1729]: time="2026-03-02T13:30:48.209805272Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:48.215105 containerd[1729]: time="2026-03-02T13:30:48.214739196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:48.215848 containerd[1729]: time="2026-03-02T13:30:48.215813957Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 3.012878381s"
Mar 2 13:30:48.215905 containerd[1729]: time="2026-03-02T13:30:48.215848717Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 2 13:30:48.216614 containerd[1729]: time="2026-03-02T13:30:48.216587838Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 2 13:30:50.389617 containerd[1729]: time="2026-03-02T13:30:50.388703520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:50.391201 containerd[1729]: time="2026-03-02T13:30:50.391170642Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 2 13:30:50.398762 containerd[1729]: time="2026-03-02T13:30:50.398726969Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:50.404267 containerd[1729]: time="2026-03-02T13:30:50.404223534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:50.406518 containerd[1729]: time="2026-03-02T13:30:50.406477256Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 2.189847178s"
Mar 2 13:30:50.406561 containerd[1729]: time="2026-03-02T13:30:50.406523056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 2 13:30:50.407013 containerd[1729]: time="2026-03-02T13:30:50.406986096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 2 13:30:52.138501 containerd[1729]: time="2026-03-02T13:30:52.138443328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:52.141000 containerd[1729]: time="2026-03-02T13:30:52.140970453Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 2 13:30:52.144282 containerd[1729]: time="2026-03-02T13:30:52.144252298Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:52.152512 containerd[1729]: time="2026-03-02T13:30:52.151970351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:52.153331 containerd[1729]: time="2026-03-02T13:30:52.153276513Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.746251417s"
Mar 2 13:30:52.153331 containerd[1729]: time="2026-03-02T13:30:52.153331513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 2 13:30:52.153874 containerd[1729]: time="2026-03-02T13:30:52.153809714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 2 13:30:52.575210 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 13:30:52.582675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:30:52.689229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:30:52.696721 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:30:52.778350 kubelet[2445]: E0302 13:30:52.778297 2445 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:30:52.781596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:30:52.781910 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:30:53.884150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884906624.mount: Deactivated successfully.
Mar 2 13:30:54.300208 containerd[1729]: time="2026-03-02T13:30:54.300087094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:54.302563 containerd[1729]: time="2026-03-02T13:30:54.302320778Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 2 13:30:54.305327 containerd[1729]: time="2026-03-02T13:30:54.305031742Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:54.310058 containerd[1729]: time="2026-03-02T13:30:54.310020990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:54.310799 containerd[1729]: time="2026-03-02T13:30:54.310769631Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 2.156918317s"
Mar 2 13:30:54.310901 containerd[1729]: time="2026-03-02T13:30:54.310884472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 2 13:30:54.311571 containerd[1729]: time="2026-03-02T13:30:54.311542033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 2 13:30:55.030824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108279464.mount: Deactivated successfully.
Mar 2 13:30:56.538257 containerd[1729]: time="2026-03-02T13:30:56.538195365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:56.541156 containerd[1729]: time="2026-03-02T13:30:56.541119048Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 2 13:30:56.544299 containerd[1729]: time="2026-03-02T13:30:56.544263971Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:56.551137 containerd[1729]: time="2026-03-02T13:30:56.551095218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:56.552306 containerd[1729]: time="2026-03-02T13:30:56.551954299Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.240376586s"
Mar 2 13:30:56.552306 containerd[1729]: time="2026-03-02T13:30:56.551987819Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 2 13:30:56.552554 containerd[1729]: time="2026-03-02T13:30:56.552422580Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 2 13:30:57.120681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527811025.mount: Deactivated successfully.
Mar 2 13:30:57.141254 containerd[1729]: time="2026-03-02T13:30:57.140445535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:57.142931 containerd[1729]: time="2026-03-02T13:30:57.142898258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 2 13:30:57.146008 containerd[1729]: time="2026-03-02T13:30:57.145955941Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:57.149928 containerd[1729]: time="2026-03-02T13:30:57.149862945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:57.150806 containerd[1729]: time="2026-03-02T13:30:57.150651345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 598.197645ms"
Mar 2 13:30:57.150806 containerd[1729]: time="2026-03-02T13:30:57.150685385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 2 13:30:57.151282 containerd[1729]: time="2026-03-02T13:30:57.151107506Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 2 13:30:57.806654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301942387.mount: Deactivated successfully.
Mar 2 13:30:59.377503 containerd[1729]: time="2026-03-02T13:30:59.376970800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:59.379617 containerd[1729]: time="2026-03-02T13:30:59.379580723Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 2 13:30:59.383972 containerd[1729]: time="2026-03-02T13:30:59.383938687Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:59.388587 containerd[1729]: time="2026-03-02T13:30:59.388522412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:30:59.389974 containerd[1729]: time="2026-03-02T13:30:59.389617813Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.238482387s"
Mar 2 13:30:59.389974 containerd[1729]: time="2026-03-02T13:30:59.389655213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 2 13:31:02.826008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 13:31:02.834650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:02.946661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:02.959142 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:31:03.029716 kubelet[2606]: E0302 13:31:03.029676 2606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:31:03.033321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:31:03.033480 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:31:03.690561 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Mar 2 13:31:05.091458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:05.103765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:05.140192 update_engine[1696]: I20260302 13:31:05.139711 1696 update_attempter.cc:509] Updating boot flags...
Mar 2 13:31:05.142019 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-9.scope)...
Mar 2 13:31:05.142037 systemd[1]: Reloading...
Mar 2 13:31:05.258503 zram_generator::config[2681]: No configuration found.
Mar 2 13:31:05.344171 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:31:05.423392 systemd[1]: Reloading finished in 281 ms.
Mar 2 13:31:05.790749 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 2 13:31:05.790879 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 2 13:31:05.792540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:05.796783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:05.956517 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2732)
Mar 2 13:31:06.122554 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2736)
Mar 2 13:31:06.231990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:06.242776 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:31:06.283117 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:31:06.283117 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:31:06.283117 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:31:06.283601 kubelet[2789]: I0302 13:31:06.283166 2789 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:31:06.537545 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2736)
Mar 2 13:31:06.722716 kubelet[2789]: I0302 13:31:06.722674 2789 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 2 13:31:06.722716 kubelet[2789]: I0302 13:31:06.722707 2789 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:31:06.722980 kubelet[2789]: I0302 13:31:06.722961 2789 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:31:06.746752 kubelet[2789]: E0302 13:31:06.746701 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:31:06.752421 kubelet[2789]: I0302 13:31:06.752175 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:31:06.760698 kubelet[2789]: E0302 13:31:06.760654 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:31:06.760698 kubelet[2789]: I0302 13:31:06.760700 2789 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:31:06.764179 kubelet[2789]: I0302 13:31:06.764155 2789 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 2 13:31:06.766117 kubelet[2789]: I0302 13:31:06.766070 2789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:31:06.766280 kubelet[2789]: I0302 13:31:06.766118 2789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-160832fd4e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:31:06.766365 kubelet[2789]: I0302 13:31:06.766287 2789 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:31:06.766365 kubelet[2789]: I0302 13:31:06.766295 2789 container_manager_linux.go:303] "Creating device plugin manager"
Mar 2 13:31:06.766455 kubelet[2789]: I0302 13:31:06.766438 2789 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:31:06.769285 kubelet[2789]: I0302 13:31:06.769263 2789 kubelet.go:480] "Attempting to sync node with API server"
Mar 2 13:31:06.769327 kubelet[2789]: I0302 13:31:06.769292 2789 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:31:06.769327 kubelet[2789]: I0302 13:31:06.769320 2789 kubelet.go:386] "Adding apiserver pod source"
Mar 2 13:31:06.770639 kubelet[2789]: I0302 13:31:06.770446 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:31:06.774187 kubelet[2789]: E0302 13:31:06.774151 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-160832fd4e&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:31:06.775501 kubelet[2789]: I0302 13:31:06.774734 2789 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:31:06.775501 kubelet[2789]: I0302 13:31:06.775351 2789 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:31:06.775501 kubelet[2789]: W0302 13:31:06.775437 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 13:31:06.779690 kubelet[2789]: I0302 13:31:06.779668 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 2 13:31:06.779881 kubelet[2789]: I0302 13:31:06.779825 2789 server.go:1289] "Started kubelet"
Mar 2 13:31:06.780321 kubelet[2789]: E0302 13:31:06.780287 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:31:06.780490 kubelet[2789]: I0302 13:31:06.780391 2789 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:31:06.781224 kubelet[2789]: I0302 13:31:06.781203 2789 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:31:06.782506 kubelet[2789]: I0302 13:31:06.781979 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:31:06.782506 kubelet[2789]: I0302 13:31:06.782423 2789 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:31:06.783768 kubelet[2789]: E0302 13:31:06.782720 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-160832fd4e.189909671a9d46d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-160832fd4e,UID:ci-4081.3.101-160832fd4e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-160832fd4e,},FirstTimestamp:2026-03-02 13:31:06.779797201 +0000 UTC m=+0.533340070,LastTimestamp:2026-03-02 13:31:06.779797201 +0000 UTC m=+0.533340070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-160832fd4e,}"
Mar 2 13:31:06.785561 kubelet[2789]: I0302 13:31:06.785536 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:31:06.790311 kubelet[2789]: I0302 13:31:06.785947 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:31:06.790595 kubelet[2789]: I0302 13:31:06.790577 2789 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 2 13:31:06.791737 kubelet[2789]: I0302 13:31:06.791707 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 2 13:31:06.791990 kubelet[2789]: I0302 13:31:06.791977 2789 reconciler.go:26] "Reconciler: start to sync state"
Mar 2 13:31:06.792606 kubelet[2789]: E0302 13:31:06.792580 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:31:06.793795 kubelet[2789]: I0302 13:31:06.793770 2789 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:31:06.794022 kubelet[2789]: I0302 13:31:06.794003 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:31:06.794926 kubelet[2789]: E0302 13:31:06.794904 2789 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:31:06.797488 kubelet[2789]: I0302 13:31:06.796247 2789 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:31:06.799320 kubelet[2789]: E0302 13:31:06.799285 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.101-160832fd4e\" not found"
Mar 2 13:31:06.815158 kubelet[2789]: I0302 13:31:06.815097 2789 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:31:06.816238 kubelet[2789]: I0302 13:31:06.816200 2789 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:31:06.816238 kubelet[2789]: I0302 13:31:06.816227 2789 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 2 13:31:06.816342 kubelet[2789]: I0302 13:31:06.816251 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:31:06.816342 kubelet[2789]: I0302 13:31:06.816259 2789 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 2 13:31:06.816342 kubelet[2789]: E0302 13:31:06.816305 2789 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:31:06.819896 kubelet[2789]: E0302 13:31:06.819854 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-160832fd4e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="200ms"
Mar 2 13:31:06.821008 kubelet[2789]: E0302 13:31:06.820977 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:31:06.883238 kubelet[2789]: I0302 13:31:06.883203 2789 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:31:06.883238 kubelet[2789]: I0302 13:31:06.883229 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:31:06.883375 kubelet[2789]: I0302 13:31:06.883251 2789 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:31:06.889487 kubelet[2789]: I0302 13:31:06.889449 2789 policy_none.go:49] "None policy: Start"
Mar 2 13:31:06.889487 kubelet[2789]: I0302 13:31:06.889487 2789 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 2 13:31:06.889575 kubelet[2789]: I0302 13:31:06.889504 2789 state_mem.go:35] "Initializing new in-memory state store"
Mar 2 13:31:06.898067 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 13:31:06.907946 kubelet[2789]: E0302 13:31:06.907907 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.101-160832fd4e\" not found"
Mar 2 13:31:06.913537 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 13:31:06.916537 kubelet[2789]: E0302 13:31:06.916494 2789 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 2 13:31:06.917003 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 13:31:06.922133 kubelet[2789]: E0302 13:31:06.922101 2789 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:31:06.922316 kubelet[2789]: I0302 13:31:06.922299 2789 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:31:06.922343 kubelet[2789]: I0302 13:31:06.922315 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:31:06.923097 kubelet[2789]: I0302 13:31:06.922937 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:31:06.925440 kubelet[2789]: E0302 13:31:06.924860 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:31:06.925440 kubelet[2789]: E0302 13:31:06.924901 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.101-160832fd4e\" not found"
Mar 2 13:31:07.021384 kubelet[2789]: E0302 13:31:07.021250 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-160832fd4e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="400ms"
Mar 2 13:31:07.024495 kubelet[2789]: I0302 13:31:07.024213 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.024639 kubelet[2789]: E0302 13:31:07.024537 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.129875 systemd[1]: Created slice kubepods-burstable-pod745e5577629948574b0d8ae493b3af13.slice - libcontainer container kubepods-burstable-pod745e5577629948574b0d8ae493b3af13.slice.
Mar 2 13:31:07.136395 kubelet[2789]: E0302 13:31:07.136206 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.141881 systemd[1]: Created slice kubepods-burstable-pod3ef96cc730d4903644d4777f3ff4bfec.slice - libcontainer container kubepods-burstable-pod3ef96cc730d4903644d4777f3ff4bfec.slice.
Mar 2 13:31:07.150076 kubelet[2789]: E0302 13:31:07.149850 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.152950 systemd[1]: Created slice kubepods-burstable-podc84eed5a3aa13ab4cbbfba4d2a0cf494.slice - libcontainer container kubepods-burstable-podc84eed5a3aa13ab4cbbfba4d2a0cf494.slice.
Mar 2 13:31:07.155028 kubelet[2789]: E0302 13:31:07.154813 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193446 kubelet[2789]: I0302 13:31:07.193297 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/745e5577629948574b0d8ae493b3af13-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-160832fd4e\" (UID: \"745e5577629948574b0d8ae493b3af13\") " pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193446 kubelet[2789]: I0302 13:31:07.193338 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193446 kubelet[2789]: I0302 13:31:07.193354 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193446 kubelet[2789]: I0302 13:31:07.193369 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c84eed5a3aa13ab4cbbfba4d2a0cf494-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-160832fd4e\" (UID: \"c84eed5a3aa13ab4cbbfba4d2a0cf494\") " pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193446 kubelet[2789]: I0302 13:31:07.193385 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/745e5577629948574b0d8ae493b3af13-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-160832fd4e\" (UID: \"745e5577629948574b0d8ae493b3af13\") " pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193665 kubelet[2789]: I0302 13:31:07.193428 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/745e5577629948574b0d8ae493b3af13-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-160832fd4e\" (UID: \"745e5577629948574b0d8ae493b3af13\") " pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193665 kubelet[2789]: I0302 13:31:07.193475 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193665 kubelet[2789]: I0302 13:31:07.193500 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.193665 kubelet[2789]: I0302 13:31:07.193516 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.226431 kubelet[2789]: I0302 13:31:07.226389 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.226743 kubelet[2789]: E0302 13:31:07.226717 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.421679 kubelet[2789]: E0302 13:31:07.421644 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-160832fd4e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="800ms"
Mar 2 13:31:07.437420 containerd[1729]: time="2026-03-02T13:31:07.437379087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-160832fd4e,Uid:745e5577629948574b0d8ae493b3af13,Namespace:kube-system,Attempt:0,}"
Mar 2 13:31:07.451703 containerd[1729]: time="2026-03-02T13:31:07.451526743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-160832fd4e,Uid:3ef96cc730d4903644d4777f3ff4bfec,Namespace:kube-system,Attempt:0,}"
Mar 2 13:31:07.456330 containerd[1729]: time="2026-03-02T13:31:07.456294148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-160832fd4e,Uid:c84eed5a3aa13ab4cbbfba4d2a0cf494,Namespace:kube-system,Attempt:0,}"
Mar 2 13:31:07.629055 kubelet[2789]: I0302 13:31:07.628955 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.629488 kubelet[2789]: E0302 13:31:07.629445 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:07.712541 kubelet[2789]: E0302 13:31:07.712400 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:31:07.957314 kubelet[2789]: E0302 13:31:07.957269 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:31:08.156665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3132497062.mount: Deactivated successfully.
Mar 2 13:31:08.181110 containerd[1729]: time="2026-03-02T13:31:08.181061005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:31:08.183278 containerd[1729]: time="2026-03-02T13:31:08.183238649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Mar 2 13:31:08.185996 containerd[1729]: time="2026-03-02T13:31:08.185963574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:31:08.190451 containerd[1729]: time="2026-03-02T13:31:08.190408623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:31:08.195619 containerd[1729]: time="2026-03-02T13:31:08.195571032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:31:08.201571 containerd[1729]: time="2026-03-02T13:31:08.201530484Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:31:08.205046 containerd[1729]: time="2026-03-02T13:31:08.203798528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:31:08.210191 containerd[1729]: time="2026-03-02T13:31:08.210125580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:31:08.211496 containerd[1729]: time="2026-03-02T13:31:08.210971622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 773.510735ms"
Mar 2 13:31:08.211956 containerd[1729]: time="2026-03-02T13:31:08.211922664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 755.559396ms"
Mar 2 13:31:08.218610 containerd[1729]: time="2026-03-02T13:31:08.218415596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 766.818133ms"
Mar 2 13:31:08.222533 kubelet[2789]: E0302 13:31:08.222495 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-160832fd4e?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="1.6s"
Mar 2 13:31:08.284312 kubelet[2789]: E0302 13:31:08.284268 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.101-160832fd4e&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:31:08.363434 kubelet[2789]: E0302 13:31:08.363378 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:31:08.431626 kubelet[2789]: I0302 13:31:08.431526 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:08.431928 kubelet[2789]: E0302 13:31:08.431847 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:08.776513 kubelet[2789]: E0302 13:31:08.776086 2789 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:31:08.788031 containerd[1729]: time="2026-03-02T13:31:08.787928560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:31:08.788648 containerd[1729]: time="2026-03-02T13:31:08.788001000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:31:08.788648 containerd[1729]: time="2026-03-02T13:31:08.788030960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:08.788990 containerd[1729]: time="2026-03-02T13:31:08.788925562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:08.790832 containerd[1729]: time="2026-03-02T13:31:08.790727806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:31:08.790964 containerd[1729]: time="2026-03-02T13:31:08.790807526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:31:08.790964 containerd[1729]: time="2026-03-02T13:31:08.790912406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:08.791210 containerd[1729]: time="2026-03-02T13:31:08.791079126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:08.799049 containerd[1729]: time="2026-03-02T13:31:08.798805861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:31:08.799049 containerd[1729]: time="2026-03-02T13:31:08.798872221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:31:08.799049 containerd[1729]: time="2026-03-02T13:31:08.798883421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:08.799049 containerd[1729]: time="2026-03-02T13:31:08.798961381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:08.816686 systemd[1]: Started cri-containerd-9d358c0d21fb90905630904017b333284e71f5040254bb37c7fee499a212bc4c.scope - libcontainer container 9d358c0d21fb90905630904017b333284e71f5040254bb37c7fee499a212bc4c.
Mar 2 13:31:08.820410 systemd[1]: Started cri-containerd-68ef6894eba1e2141edd5b15b5ff602123f606f75273ebbbba3e5710db13f958.scope - libcontainer container 68ef6894eba1e2141edd5b15b5ff602123f606f75273ebbbba3e5710db13f958.
Mar 2 13:31:08.831656 systemd[1]: Started cri-containerd-f5f326a85b59a987e908af11702702cae9032f01577d80474d4ebdefa0629d1c.scope - libcontainer container f5f326a85b59a987e908af11702702cae9032f01577d80474d4ebdefa0629d1c.
Mar 2 13:31:08.870880 containerd[1729]: time="2026-03-02T13:31:08.870616918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-160832fd4e,Uid:745e5577629948574b0d8ae493b3af13,Namespace:kube-system,Attempt:0,} returns sandbox id \"68ef6894eba1e2141edd5b15b5ff602123f606f75273ebbbba3e5710db13f958\""
Mar 2 13:31:08.876616 containerd[1729]: time="2026-03-02T13:31:08.876431009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-160832fd4e,Uid:3ef96cc730d4903644d4777f3ff4bfec,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5f326a85b59a987e908af11702702cae9032f01577d80474d4ebdefa0629d1c\""
Mar 2 13:31:08.878582 containerd[1729]: time="2026-03-02T13:31:08.878138772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-160832fd4e,Uid:c84eed5a3aa13ab4cbbfba4d2a0cf494,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d358c0d21fb90905630904017b333284e71f5040254bb37c7fee499a212bc4c\""
Mar 2 13:31:08.892583 containerd[1729]: time="2026-03-02T13:31:08.892533639Z" level=info msg="CreateContainer within sandbox \"68ef6894eba1e2141edd5b15b5ff602123f606f75273ebbbba3e5710db13f958\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 2 13:31:08.898644 containerd[1729]: time="2026-03-02T13:31:08.898604091Z" level=info msg="CreateContainer within sandbox \"f5f326a85b59a987e908af11702702cae9032f01577d80474d4ebdefa0629d1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 2 13:31:08.903060 containerd[1729]: time="2026-03-02T13:31:08.902887379Z" level=info msg="CreateContainer within sandbox \"9d358c0d21fb90905630904017b333284e71f5040254bb37c7fee499a212bc4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 2 13:31:08.962828 containerd[1729]: time="2026-03-02T13:31:08.962615693Z" level=info msg="CreateContainer within sandbox \"68ef6894eba1e2141edd5b15b5ff602123f606f75273ebbbba3e5710db13f958\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a2783cd28049d95280c9dafcf30591ca39d7b6b05eb5fb627cdd3dc5b8faea3\""
Mar 2 13:31:08.964508 containerd[1729]: time="2026-03-02T13:31:08.963360894Z" level=info msg="StartContainer for \"4a2783cd28049d95280c9dafcf30591ca39d7b6b05eb5fb627cdd3dc5b8faea3\""
Mar 2 13:31:08.974739 containerd[1729]: time="2026-03-02T13:31:08.974686156Z" level=info msg="CreateContainer within sandbox \"f5f326a85b59a987e908af11702702cae9032f01577d80474d4ebdefa0629d1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c4246bd6f48d7dd1559775b809c87095ade70773ca12dd06d5fa87aea4b7d99\""
Mar 2 13:31:08.976514 containerd[1729]: time="2026-03-02T13:31:08.975257317Z" level=info msg="StartContainer for \"2c4246bd6f48d7dd1559775b809c87095ade70773ca12dd06d5fa87aea4b7d99\""
Mar 2 13:31:08.982844 containerd[1729]: time="2026-03-02T13:31:08.982796211Z" level=info msg="CreateContainer within sandbox \"9d358c0d21fb90905630904017b333284e71f5040254bb37c7fee499a212bc4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b80ed6705d3d953eb81a2520da416440970fc0be79d100131e227cfda1c12924\""
Mar 2 13:31:08.984885 containerd[1729]: time="2026-03-02T13:31:08.984845255Z" level=info msg="StartContainer for \"b80ed6705d3d953eb81a2520da416440970fc0be79d100131e227cfda1c12924\""
Mar 2 13:31:08.995693 systemd[1]: Started cri-containerd-4a2783cd28049d95280c9dafcf30591ca39d7b6b05eb5fb627cdd3dc5b8faea3.scope - libcontainer container 4a2783cd28049d95280c9dafcf30591ca39d7b6b05eb5fb627cdd3dc5b8faea3.
Mar 2 13:31:09.025663 systemd[1]: Started cri-containerd-2c4246bd6f48d7dd1559775b809c87095ade70773ca12dd06d5fa87aea4b7d99.scope - libcontainer container 2c4246bd6f48d7dd1559775b809c87095ade70773ca12dd06d5fa87aea4b7d99.
Mar 2 13:31:09.033711 systemd[1]: Started cri-containerd-b80ed6705d3d953eb81a2520da416440970fc0be79d100131e227cfda1c12924.scope - libcontainer container b80ed6705d3d953eb81a2520da416440970fc0be79d100131e227cfda1c12924.
Mar 2 13:31:09.054893 containerd[1729]: time="2026-03-02T13:31:09.054831708Z" level=info msg="StartContainer for \"4a2783cd28049d95280c9dafcf30591ca39d7b6b05eb5fb627cdd3dc5b8faea3\" returns successfully"
Mar 2 13:31:09.098758 containerd[1729]: time="2026-03-02T13:31:09.098704632Z" level=info msg="StartContainer for \"2c4246bd6f48d7dd1559775b809c87095ade70773ca12dd06d5fa87aea4b7d99\" returns successfully"
Mar 2 13:31:09.115230 containerd[1729]: time="2026-03-02T13:31:09.115100463Z" level=info msg="StartContainer for \"b80ed6705d3d953eb81a2520da416440970fc0be79d100131e227cfda1c12924\" returns successfully"
Mar 2 13:31:09.835848 kubelet[2789]: E0302 13:31:09.835817 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:09.839256 kubelet[2789]: E0302 13:31:09.839041 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:09.842909 kubelet[2789]: E0302 13:31:09.842746 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:10.036811 kubelet[2789]: I0302 13:31:10.036037 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:10.844372 kubelet[2789]: E0302 13:31:10.844302 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:10.845807 kubelet[2789]: E0302 13:31:10.845661 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:11.846212 kubelet[2789]: E0302 13:31:11.846183 2789 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.060500 kubelet[2789]: E0302 13:31:12.060447 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.101-160832fd4e\" not found" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.192455 kubelet[2789]: I0302 13:31:12.192152 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.192455 kubelet[2789]: E0302 13:31:12.192194 2789 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.101-160832fd4e\": node \"ci-4081.3.101-160832fd4e\" not found"
Mar 2 13:31:12.200656 kubelet[2789]: I0302 13:31:12.200613 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.213396 kubelet[2789]: E0302 13:31:12.213188 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-160832fd4e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.213396 kubelet[2789]: I0302 13:31:12.213216 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.216098 kubelet[2789]: E0302 13:31:12.216059 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.101-160832fd4e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.216098 kubelet[2789]: I0302 13:31:12.216092 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.218429 kubelet[2789]: E0302 13:31:12.218389 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-160832fd4e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.352124 kubelet[2789]: I0302 13:31:12.352092 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.353993 kubelet[2789]: E0302 13:31:12.353965 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-160832fd4e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:12.782951 kubelet[2789]: I0302 13:31:12.782670 2789 apiserver.go:52] "Watching apiserver"
Mar 2 13:31:12.792844 kubelet[2789]: I0302 13:31:12.792806 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 2 13:31:14.683251 systemd[1]: Reloading requested from client PID 3104 ('systemctl') (unit session-9.scope)...
Mar 2 13:31:14.683271 systemd[1]: Reloading...
Mar 2 13:31:14.782664 zram_generator::config[3150]: No configuration found.
Mar 2 13:31:14.884541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:31:14.979045 systemd[1]: Reloading finished in 295 ms.
Mar 2 13:31:15.015075 kubelet[2789]: I0302 13:31:15.015007 2789 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:31:15.015502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:15.032800 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:31:15.033168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:15.038981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:31:15.149361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:31:15.154823 (kubelet)[3208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:31:15.190417 kubelet[3208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:31:15.190417 kubelet[3208]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:31:15.190417 kubelet[3208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:31:15.192116 kubelet[3208]: I0302 13:31:15.190886 3208 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:31:15.198202 kubelet[3208]: I0302 13:31:15.198175 3208 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 2 13:31:15.198308 kubelet[3208]: I0302 13:31:15.198299 3208 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:31:15.198559 kubelet[3208]: I0302 13:31:15.198545 3208 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:31:15.199820 kubelet[3208]: I0302 13:31:15.199802 3208 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 2 13:31:15.202021 kubelet[3208]: I0302 13:31:15.201993 3208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:31:15.205356 kubelet[3208]: E0302 13:31:15.205327 3208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:31:15.205509 kubelet[3208]: I0302 13:31:15.205495 3208 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:31:15.208636 kubelet[3208]: I0302 13:31:15.208614 3208 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 2 13:31:15.208925 kubelet[3208]: I0302 13:31:15.208896 3208 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:31:15.209208 kubelet[3208]: I0302 13:31:15.209001 3208 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-160832fd4e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:31:15.209362 kubelet[3208]: I0302 13:31:15.209347 3208 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:31:15.209422 kubelet[3208]: I0302 13:31:15.209414 3208 container_manager_linux.go:303] "Creating device plugin manager"
Mar 2 13:31:15.209561 kubelet[3208]: I0302 13:31:15.209549 3208 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:31:15.209833 kubelet[3208]: I0302 13:31:15.209813 3208 kubelet.go:480] "Attempting to sync node with API server"
Mar 2 13:31:15.209882 kubelet[3208]: I0302 13:31:15.209836 3208 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:31:15.209882 kubelet[3208]: I0302 13:31:15.209860 3208 kubelet.go:386] "Adding apiserver pod source"
Mar 2 13:31:15.209882 kubelet[3208]: I0302 13:31:15.209873 3208 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:31:15.217789 kubelet[3208]: I0302 13:31:15.217762 3208 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:31:15.218641 kubelet[3208]: I0302 13:31:15.218494 3208 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:31:15.221171 kubelet[3208]: I0302 13:31:15.221153 3208 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 2 13:31:15.221287 kubelet[3208]: I0302 13:31:15.221277 3208 server.go:1289] "Started kubelet"
Mar 2 13:31:15.222907 kubelet[3208]: I0302 13:31:15.222774 3208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:31:15.229075 kubelet[3208]: I0302 13:31:15.229045 3208 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:31:15.231576 kubelet[3208]: I0302 13:31:15.229905 3208 server.go:317] "Adding debug handlers to kubelet server"
Mar 2 13:31:15.234103 kubelet[3208]: I0302 13:31:15.234053 3208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:31:15.234356 kubelet[3208]: I0302 13:31:15.234344 3208 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:31:15.234708 kubelet[3208]: I0302 13:31:15.234668 3208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:31:15.235927 kubelet[3208]: I0302 13:31:15.235911 3208 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 2 13:31:15.236289 kubelet[3208]: E0302 13:31:15.236267 3208 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.101-160832fd4e\" not found"
Mar 2 13:31:15.238804 kubelet[3208]: I0302 13:31:15.238782 3208 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 2 13:31:15.239065 kubelet[3208]: I0302 13:31:15.239051 3208 reconciler.go:26] "Reconciler: start to sync state"
Mar 2 13:31:15.246677 kubelet[3208]: I0302 13:31:15.246638 3208 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:31:15.248422 kubelet[3208]: I0302 13:31:15.247966 3208 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:31:15.248422 kubelet[3208]: I0302 13:31:15.248004 3208 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 2 13:31:15.248422 kubelet[3208]: I0302 13:31:15.248029 3208 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:31:15.248422 kubelet[3208]: I0302 13:31:15.248036 3208 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 2 13:31:15.248422 kubelet[3208]: E0302 13:31:15.248081 3208 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:31:15.255870 kubelet[3208]: I0302 13:31:15.255839 3208 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:31:15.256177 kubelet[3208]: I0302 13:31:15.256149 3208 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:31:15.265235 kubelet[3208]: I0302 13:31:15.263234 3208 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.326871 3208 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.326889 3208 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.326910 3208 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.327046 3208 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.327056 3208 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.327073 3208 policy_none.go:49] "None policy: Start"
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.327093 3208 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.327102 3208 state_mem.go:35] "Initializing new in-memory state store"
Mar 2 13:31:15.327506 kubelet[3208]: I0302 13:31:15.327195 3208 state_mem.go:75] "Updated machine memory state"
Mar 2 13:31:15.332879 kubelet[3208]: E0302 13:31:15.332584 3208 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:31:15.333219 kubelet[3208]: I0302 13:31:15.333069 3208 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:31:15.333219 kubelet[3208]: I0302 13:31:15.333081 3208 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:31:15.333328 kubelet[3208]: I0302 13:31:15.333308 3208 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:31:15.338494 kubelet[3208]: E0302 13:31:15.337058 3208 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:31:15.348751 kubelet[3208]: I0302 13:31:15.348716 3208 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.349543 kubelet[3208]: I0302 13:31:15.349187 3208 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.349543 kubelet[3208]: I0302 13:31:15.349407 3208 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.368264 kubelet[3208]: I0302 13:31:15.368234 3208 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:31:15.368797 kubelet[3208]: I0302 13:31:15.368527 3208 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:31:15.369192 kubelet[3208]: I0302 13:31:15.368597 3208 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:31:15.436337 kubelet[3208]: I0302 13:31:15.435986 3208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.451053 kubelet[3208]: I0302 13:31:15.451011 3208 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.451234 kubelet[3208]: I0302 13:31:15.451107 3208 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.540606 kubelet[3208]: I0302 13:31:15.540481 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c84eed5a3aa13ab4cbbfba4d2a0cf494-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-160832fd4e\" (UID: \"c84eed5a3aa13ab4cbbfba4d2a0cf494\") " pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.540606 kubelet[3208]: I0302 13:31:15.540526 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/745e5577629948574b0d8ae493b3af13-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-160832fd4e\" (UID: \"745e5577629948574b0d8ae493b3af13\") " pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.540606 kubelet[3208]: I0302 13:31:15.540550 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/745e5577629948574b0d8ae493b3af13-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-160832fd4e\" (UID: \"745e5577629948574b0d8ae493b3af13\") " pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.540606 kubelet[3208]: I0302 13:31:15.540572 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.541629 kubelet[3208]: I0302 13:31:15.541598 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/745e5577629948574b0d8ae493b3af13-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-160832fd4e\" (UID: \"745e5577629948574b0d8ae493b3af13\") " pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.541730 kubelet[3208]: I0302 13:31:15.541654 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.541730 kubelet[3208]: I0302 13:31:15.541697 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.541789 kubelet[3208]: I0302 13:31:15.541729 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:15.541789 kubelet[3208]: I0302 13:31:15.541750 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3ef96cc730d4903644d4777f3ff4bfec-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-160832fd4e\" (UID: \"3ef96cc730d4903644d4777f3ff4bfec\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e"
Mar 2 13:31:16.212531 kubelet[3208]: I0302 13:31:16.212239 3208 apiserver.go:52] "Watching apiserver"
Mar 2 13:31:16.239107 kubelet[3208]: I0302 13:31:16.239032 3208 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 2 13:31:16.300641 kubelet[3208]: I0302 13:31:16.299986 3208 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:16.301404 kubelet[3208]: I0302 13:31:16.300692 3208 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:16.323007 kubelet[3208]: I0302 13:31:16.322977 3208 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:31:16.323397 kubelet[3208]: E0302 13:31:16.323208 3208 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-160832fd4e\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e"
Mar 2 13:31:16.324304 kubelet[3208]: I0302 13:31:16.324189 3208 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 2 13:31:16.324304 kubelet[3208]: E0302 13:31:16.324227 3208 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-160832fd4e\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e"
Mar 2 13:31:16.337635 kubelet[3208]: I0302 13:31:16.337276 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.101-160832fd4e" podStartSLOduration=1.337260709 podStartE2EDuration="1.337260709s" podCreationTimestamp="2026-03-02 13:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:31:16.324832171 +0000 UTC m=+1.165262694" watchObservedRunningTime="2026-03-02 13:31:16.337260709 +0000 UTC m=+1.177691192"
Mar 2 13:31:16.337635 kubelet[3208]: I0302 13:31:16.337396 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.101-160832fd4e" podStartSLOduration=1.337392149 podStartE2EDuration="1.337392149s" podCreationTimestamp="2026-03-02 13:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:31:16.337092509 +0000 UTC m=+1.177522992" watchObservedRunningTime="2026-03-02 13:31:16.337392149 +0000 UTC m=+1.177822632"
Mar 2 13:31:16.348865 kubelet[3208]: I0302 13:31:16.348796 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.101-160832fd4e" podStartSLOduration=1.348782366 podStartE2EDuration="1.348782366s" podCreationTimestamp="2026-03-02 13:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:31:16.348750046 +0000 UTC m=+1.189180529" watchObservedRunningTime="2026-03-02 13:31:16.348782366 +0000 UTC m=+1.189212849"
Mar 2 13:31:20.411408 kubelet[3208]: I0302 13:31:20.411370 3208 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 2 13:31:20.412297 kubelet[3208]: I0302 13:31:20.411946 3208 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 2 13:31:20.412337 containerd[1729]: time="2026-03-02T13:31:20.411726159Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 2 13:31:21.415497 systemd[1]: Created slice kubepods-besteffort-podd8b925c1_79c9_40c4_abe9_cd4cb5cfe763.slice - libcontainer container kubepods-besteffort-podd8b925c1_79c9_40c4_abe9_cd4cb5cfe763.slice.
Mar 2 13:31:21.477442 kubelet[3208]: I0302 13:31:21.477267 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8b925c1-79c9-40c4-abe9-cd4cb5cfe763-kube-proxy\") pod \"kube-proxy-sd2wx\" (UID: \"d8b925c1-79c9-40c4-abe9-cd4cb5cfe763\") " pod="kube-system/kube-proxy-sd2wx"
Mar 2 13:31:21.477442 kubelet[3208]: I0302 13:31:21.477322 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8b925c1-79c9-40c4-abe9-cd4cb5cfe763-lib-modules\") pod \"kube-proxy-sd2wx\" (UID: \"d8b925c1-79c9-40c4-abe9-cd4cb5cfe763\") " pod="kube-system/kube-proxy-sd2wx"
Mar 2 13:31:21.477442 kubelet[3208]: I0302 13:31:21.477343 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8b925c1-79c9-40c4-abe9-cd4cb5cfe763-xtables-lock\") pod \"kube-proxy-sd2wx\" (UID: \"d8b925c1-79c9-40c4-abe9-cd4cb5cfe763\") " pod="kube-system/kube-proxy-sd2wx"
Mar 2 13:31:21.477442 kubelet[3208]: I0302 13:31:21.477358 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdr5\" (UniqueName: \"kubernetes.io/projected/d8b925c1-79c9-40c4-abe9-cd4cb5cfe763-kube-api-access-hwdr5\") pod \"kube-proxy-sd2wx\" (UID: \"d8b925c1-79c9-40c4-abe9-cd4cb5cfe763\") " pod="kube-system/kube-proxy-sd2wx"
Mar 2 13:31:21.710415 systemd[1]: Created slice kubepods-besteffort-pod0f775ba0_9f8c_4c35_b8b2_a9b6d5b51240.slice - libcontainer container kubepods-besteffort-pod0f775ba0_9f8c_4c35_b8b2_a9b6d5b51240.slice.
Mar 2 13:31:21.724380 containerd[1729]: time="2026-03-02T13:31:21.723993962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sd2wx,Uid:d8b925c1-79c9-40c4-abe9-cd4cb5cfe763,Namespace:kube-system,Attempt:0,}"
Mar 2 13:31:21.765416 containerd[1729]: time="2026-03-02T13:31:21.765339182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:31:21.766182 containerd[1729]: time="2026-03-02T13:31:21.766093143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:31:21.766182 containerd[1729]: time="2026-03-02T13:31:21.766119703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:21.766353 containerd[1729]: time="2026-03-02T13:31:21.766322104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:21.779889 kubelet[3208]: I0302 13:31:21.779774 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt9c6\" (UniqueName: \"kubernetes.io/projected/0f775ba0-9f8c-4c35-b8b2-a9b6d5b51240-kube-api-access-pt9c6\") pod \"tigera-operator-7d4578d8d-j7hws\" (UID: \"0f775ba0-9f8c-4c35-b8b2-a9b6d5b51240\") " pod="tigera-operator/tigera-operator-7d4578d8d-j7hws"
Mar 2 13:31:21.779889 kubelet[3208]: I0302 13:31:21.779825 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f775ba0-9f8c-4c35-b8b2-a9b6d5b51240-var-lib-calico\") pod \"tigera-operator-7d4578d8d-j7hws\" (UID: \"0f775ba0-9f8c-4c35-b8b2-a9b6d5b51240\") " pod="tigera-operator/tigera-operator-7d4578d8d-j7hws"
Mar 2 13:31:21.789708 systemd[1]: Started cri-containerd-74666446b562b18465aa5d19b958dcfaca9def0c1513e01c7f8206c33b2ebd2d.scope - libcontainer container 74666446b562b18465aa5d19b958dcfaca9def0c1513e01c7f8206c33b2ebd2d.
Mar 2 13:31:21.812562 containerd[1729]: time="2026-03-02T13:31:21.812513131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sd2wx,Uid:d8b925c1-79c9-40c4-abe9-cd4cb5cfe763,Namespace:kube-system,Attempt:0,} returns sandbox id \"74666446b562b18465aa5d19b958dcfaca9def0c1513e01c7f8206c33b2ebd2d\""
Mar 2 13:31:21.821290 containerd[1729]: time="2026-03-02T13:31:21.821244024Z" level=info msg="CreateContainer within sandbox \"74666446b562b18465aa5d19b958dcfaca9def0c1513e01c7f8206c33b2ebd2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 2 13:31:21.860574 containerd[1729]: time="2026-03-02T13:31:21.860448642Z" level=info msg="CreateContainer within sandbox \"74666446b562b18465aa5d19b958dcfaca9def0c1513e01c7f8206c33b2ebd2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07f746aac28824bc46624eda8e70dd6239dae88c81472cc3d0f30c9108bb2f9b\""
Mar 2 13:31:21.862305 containerd[1729]: time="2026-03-02T13:31:21.862173164Z" level=info msg="StartContainer for \"07f746aac28824bc46624eda8e70dd6239dae88c81472cc3d0f30c9108bb2f9b\""
Mar 2 13:31:21.889923 systemd[1]: Started cri-containerd-07f746aac28824bc46624eda8e70dd6239dae88c81472cc3d0f30c9108bb2f9b.scope - libcontainer container 07f746aac28824bc46624eda8e70dd6239dae88c81472cc3d0f30c9108bb2f9b.
Mar 2 13:31:21.923816 containerd[1729]: time="2026-03-02T13:31:21.923769334Z" level=info msg="StartContainer for \"07f746aac28824bc46624eda8e70dd6239dae88c81472cc3d0f30c9108bb2f9b\" returns successfully"
Mar 2 13:31:22.019150 containerd[1729]: time="2026-03-02T13:31:22.018682193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-j7hws,Uid:0f775ba0-9f8c-4c35-b8b2-a9b6d5b51240,Namespace:tigera-operator,Attempt:0,}"
Mar 2 13:31:22.072097 containerd[1729]: time="2026-03-02T13:31:22.071528631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:31:22.072984 containerd[1729]: time="2026-03-02T13:31:22.072152712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:31:22.072984 containerd[1729]: time="2026-03-02T13:31:22.072172432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:22.072984 containerd[1729]: time="2026-03-02T13:31:22.072921553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:31:22.100684 systemd[1]: Started cri-containerd-842f37bee1e5bf7067988a7480b31579ccda88f7e89e9f93629cb6b74b797336.scope - libcontainer container 842f37bee1e5bf7067988a7480b31579ccda88f7e89e9f93629cb6b74b797336.
Mar 2 13:31:22.129833 containerd[1729]: time="2026-03-02T13:31:22.129507756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d4578d8d-j7hws,Uid:0f775ba0-9f8c-4c35-b8b2-a9b6d5b51240,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"842f37bee1e5bf7067988a7480b31579ccda88f7e89e9f93629cb6b74b797336\""
Mar 2 13:31:22.131308 containerd[1729]: time="2026-03-02T13:31:22.131007438Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\""
Mar 2 13:31:22.596764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462756001.mount: Deactivated successfully.
Mar 2 13:31:23.742797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166997924.mount: Deactivated successfully.
Mar 2 13:31:24.442923 containerd[1729]: time="2026-03-02T13:31:24.442865059Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:24.445652 containerd[1729]: time="2026-03-02T13:31:24.445615303Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.3: active requests=0, bytes read=25060789"
Mar 2 13:31:24.450658 containerd[1729]: time="2026-03-02T13:31:24.450617789Z" level=info msg="ImageCreate event name:\"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:24.454597 containerd[1729]: time="2026-03-02T13:31:24.454538034Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:31:24.455901 containerd[1729]: time="2026-03-02T13:31:24.455245395Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.3\" with image id \"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\", repo tag \"quay.io/tigera/operator:v1.40.3\", repo digest \"quay.io/tigera/operator@sha256:3b1a6762e1f3fae8490773b8f06ddd1e6775850febbece4d6002416f39adc670\", size \"25056784\" in 2.324205037s"
Mar 2 13:31:24.455901 containerd[1729]: time="2026-03-02T13:31:24.455284275Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.3\" returns image reference \"sha256:a94b0dfe779f8dc351e02e8988fd60aecb466000f13b6f00042ab83ebb237d87\""
Mar 2 13:31:24.465000 containerd[1729]: time="2026-03-02T13:31:24.464950807Z" level=info msg="CreateContainer within sandbox \"842f37bee1e5bf7067988a7480b31579ccda88f7e89e9f93629cb6b74b797336\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 2 13:31:24.486357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255356273.mount: Deactivated successfully.
Mar 2 13:31:24.498291 containerd[1729]: time="2026-03-02T13:31:24.498240010Z" level=info msg="CreateContainer within sandbox \"842f37bee1e5bf7067988a7480b31579ccda88f7e89e9f93629cb6b74b797336\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"64d376da4eda2ffeea0e7aed52db2e4bed66e22f952353ccedfc90b0c3fc1adb\""
Mar 2 13:31:24.499456 containerd[1729]: time="2026-03-02T13:31:24.499091251Z" level=info msg="StartContainer for \"64d376da4eda2ffeea0e7aed52db2e4bed66e22f952353ccedfc90b0c3fc1adb\""
Mar 2 13:31:24.526650 systemd[1]: Started cri-containerd-64d376da4eda2ffeea0e7aed52db2e4bed66e22f952353ccedfc90b0c3fc1adb.scope - libcontainer container 64d376da4eda2ffeea0e7aed52db2e4bed66e22f952353ccedfc90b0c3fc1adb.
Mar 2 13:31:24.552113 containerd[1729]: time="2026-03-02T13:31:24.552004478Z" level=info msg="StartContainer for \"64d376da4eda2ffeea0e7aed52db2e4bed66e22f952353ccedfc90b0c3fc1adb\" returns successfully"
Mar 2 13:31:25.329845 kubelet[3208]: I0302 13:31:25.329725 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sd2wx" podStartSLOduration=4.329709628 podStartE2EDuration="4.329709628s" podCreationTimestamp="2026-03-02 13:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:31:22.327875206 +0000 UTC m=+7.168305849" watchObservedRunningTime="2026-03-02 13:31:25.329709628 +0000 UTC m=+10.170140111"
Mar 2 13:31:26.356489 kubelet[3208]: I0302 13:31:26.356340 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d4578d8d-j7hws" podStartSLOduration=3.030052054 podStartE2EDuration="5.356312294s" podCreationTimestamp="2026-03-02 13:31:21 +0000 UTC" firstStartedPulling="2026-03-02 13:31:22.130624117 +0000 UTC m=+6.971054600" lastFinishedPulling="2026-03-02 13:31:24.456884357 +0000 UTC m=+9.297314840" observedRunningTime="2026-03-02 13:31:25.329984268 +0000 UTC m=+10.170414751" watchObservedRunningTime="2026-03-02 13:31:26.356312294 +0000 UTC m=+11.196742777"
Mar 2 13:31:30.584822 sudo[2202]: pam_unix(sudo:session): session closed for user root
Mar 2 13:31:30.661428 sshd[2199]: pam_unix(sshd:session): session closed for user core
Mar 2 13:31:30.666714 systemd[1]: sshd@6-10.200.20.22:22-10.200.16.10:45350.service: Deactivated successfully.
Mar 2 13:31:30.669523 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:31:30.669876 systemd[1]: session-9.scope: Consumed 7.372s CPU time, 155.0M memory peak, 0B memory swap peak.
Mar 2 13:31:30.673041 systemd-logind[1692]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:31:30.675128 systemd-logind[1692]: Removed session 9.
Mar 2 13:31:36.264260 systemd[1]: Created slice kubepods-besteffort-pod172cfdeb_7400_44cf_be85_7ddd631561bc.slice - libcontainer container kubepods-besteffort-pod172cfdeb_7400_44cf_be85_7ddd631561bc.slice.
Mar 2 13:31:36.267591 kubelet[3208]: I0302 13:31:36.266053 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/172cfdeb-7400-44cf-be85-7ddd631561bc-typha-certs\") pod \"calico-typha-5986f5c599-dbmhb\" (UID: \"172cfdeb-7400-44cf-be85-7ddd631561bc\") " pod="calico-system/calico-typha-5986f5c599-dbmhb"
Mar 2 13:31:36.268315 kubelet[3208]: I0302 13:31:36.266092 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6wg\" (UniqueName: \"kubernetes.io/projected/172cfdeb-7400-44cf-be85-7ddd631561bc-kube-api-access-9g6wg\") pod \"calico-typha-5986f5c599-dbmhb\" (UID: \"172cfdeb-7400-44cf-be85-7ddd631561bc\") " pod="calico-system/calico-typha-5986f5c599-dbmhb"
Mar 2 13:31:36.268654 kubelet[3208]: I0302 13:31:36.268497 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/172cfdeb-7400-44cf-be85-7ddd631561bc-tigera-ca-bundle\") pod \"calico-typha-5986f5c599-dbmhb\" (UID: \"172cfdeb-7400-44cf-be85-7ddd631561bc\") " pod="calico-system/calico-typha-5986f5c599-dbmhb"
Mar 2 13:31:36.391408 systemd[1]: Created slice kubepods-besteffort-pode359ec67_5e1b_45af_bad2_30e71d548555.slice - libcontainer container kubepods-besteffort-pode359ec67_5e1b_45af_bad2_30e71d548555.slice.
Mar 2 13:31:36.472486 kubelet[3208]: I0302 13:31:36.472423 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e359ec67-5e1b-45af-bad2-30e71d548555-tigera-ca-bundle\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472486 kubelet[3208]: I0302 13:31:36.472496 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-lib-modules\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472680 kubelet[3208]: I0302 13:31:36.472515 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-nodeproc\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472680 kubelet[3208]: I0302 13:31:36.472537 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-sys-fs\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472680 kubelet[3208]: I0302 13:31:36.472553 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-xtables-lock\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472680 kubelet[3208]: I0302 13:31:36.472570 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-cni-log-dir\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472680 kubelet[3208]: I0302 13:31:36.472584 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-var-lib-calico\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472798 kubelet[3208]: I0302 13:31:36.472600 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-var-run-calico\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472798 kubelet[3208]: I0302 13:31:36.472623 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-cni-net-dir\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472798 kubelet[3208]: I0302 13:31:36.472640 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-policysync\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472798 kubelet[3208]: I0302 13:31:36.472656 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e359ec67-5e1b-45af-bad2-30e71d548555-node-certs\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472798 kubelet[3208]: I0302 13:31:36.472674 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqwz6\" (UniqueName: \"kubernetes.io/projected/e359ec67-5e1b-45af-bad2-30e71d548555-kube-api-access-mqwz6\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472907 kubelet[3208]: I0302 13:31:36.472689 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-bpffs\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472907 kubelet[3208]: I0302 13:31:36.472706 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-cni-bin-dir\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n"
Mar 2 13:31:36.472907 kubelet[3208]: I0302 13:31:36.472724 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/e359ec67-5e1b-45af-bad2-30e71d548555-flexvol-driver-host\") pod \"calico-node-jf24n\" (UID: \"e359ec67-5e1b-45af-bad2-30e71d548555\") " pod="calico-system/calico-node-jf24n" Mar 2 13:31:36.497524 kubelet[3208]: E0302 13:31:36.496923 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:36.573850 containerd[1729]: time="2026-03-02T13:31:36.573578667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5986f5c599-dbmhb,Uid:172cfdeb-7400-44cf-be85-7ddd631561bc,Namespace:calico-system,Attempt:0,}" Mar 2 13:31:36.576298 kubelet[3208]: I0302 13:31:36.575037 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8a206f66-6255-4ca6-b52b-ffcb569d488d-kubelet-dir\") pod \"csi-node-driver-jm7xw\" (UID: \"8a206f66-6255-4ca6-b52b-ffcb569d488d\") " pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:31:36.576298 kubelet[3208]: I0302 13:31:36.575090 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8a206f66-6255-4ca6-b52b-ffcb569d488d-registration-dir\") pod \"csi-node-driver-jm7xw\" (UID: \"8a206f66-6255-4ca6-b52b-ffcb569d488d\") " pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:31:36.576298 kubelet[3208]: I0302 13:31:36.575112 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2gks\" (UniqueName: \"kubernetes.io/projected/8a206f66-6255-4ca6-b52b-ffcb569d488d-kube-api-access-j2gks\") pod \"csi-node-driver-jm7xw\" (UID: \"8a206f66-6255-4ca6-b52b-ffcb569d488d\") " 
pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:31:36.576298 kubelet[3208]: I0302 13:31:36.575151 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8a206f66-6255-4ca6-b52b-ffcb569d488d-socket-dir\") pod \"csi-node-driver-jm7xw\" (UID: \"8a206f66-6255-4ca6-b52b-ffcb569d488d\") " pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:31:36.576298 kubelet[3208]: I0302 13:31:36.575174 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8a206f66-6255-4ca6-b52b-ffcb569d488d-varrun\") pod \"csi-node-driver-jm7xw\" (UID: \"8a206f66-6255-4ca6-b52b-ffcb569d488d\") " pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:31:36.579190 kubelet[3208]: E0302 13:31:36.579135 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.579190 kubelet[3208]: W0302 13:31:36.579177 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.579366 kubelet[3208]: E0302 13:31:36.579202 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.584683 kubelet[3208]: E0302 13:31:36.584651 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.584683 kubelet[3208]: W0302 13:31:36.584675 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.584838 kubelet[3208]: E0302 13:31:36.584696 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.597351 kubelet[3208]: E0302 13:31:36.597293 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.598956 kubelet[3208]: W0302 13:31:36.597323 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.598956 kubelet[3208]: E0302 13:31:36.598935 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.627683 containerd[1729]: time="2026-03-02T13:31:36.627352010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:31:36.627683 containerd[1729]: time="2026-03-02T13:31:36.627443530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:31:36.627683 containerd[1729]: time="2026-03-02T13:31:36.627504251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:31:36.627683 containerd[1729]: time="2026-03-02T13:31:36.627600011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:31:36.643085 systemd[1]: Started cri-containerd-d76d67cd11d8160495655c4c2a130ce8d84a85166376258b291368a5e034c174.scope - libcontainer container d76d67cd11d8160495655c4c2a130ce8d84a85166376258b291368a5e034c174. Mar 2 13:31:36.673603 containerd[1729]: time="2026-03-02T13:31:36.673539105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5986f5c599-dbmhb,Uid:172cfdeb-7400-44cf-be85-7ddd631561bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d76d67cd11d8160495655c4c2a130ce8d84a85166376258b291368a5e034c174\"" Mar 2 13:31:36.676326 containerd[1729]: time="2026-03-02T13:31:36.675960587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\"" Mar 2 13:31:36.677103 kubelet[3208]: E0302 13:31:36.676663 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.677103 kubelet[3208]: W0302 13:31:36.676684 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.677103 kubelet[3208]: E0302 13:31:36.676705 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.677810 kubelet[3208]: E0302 13:31:36.677745 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.677810 kubelet[3208]: W0302 13:31:36.677763 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.677810 kubelet[3208]: E0302 13:31:36.677780 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.678223 kubelet[3208]: E0302 13:31:36.678201 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.678270 kubelet[3208]: W0302 13:31:36.678219 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.678298 kubelet[3208]: E0302 13:31:36.678279 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.678618 kubelet[3208]: E0302 13:31:36.678599 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.678618 kubelet[3208]: W0302 13:31:36.678615 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.678697 kubelet[3208]: E0302 13:31:36.678626 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.678900 kubelet[3208]: E0302 13:31:36.678883 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.678900 kubelet[3208]: W0302 13:31:36.678898 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.678964 kubelet[3208]: E0302 13:31:36.678910 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.679299 kubelet[3208]: E0302 13:31:36.679278 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.679299 kubelet[3208]: W0302 13:31:36.679294 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.679378 kubelet[3208]: E0302 13:31:36.679306 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.679527 kubelet[3208]: E0302 13:31:36.679510 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.679527 kubelet[3208]: W0302 13:31:36.679525 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.679604 kubelet[3208]: E0302 13:31:36.679536 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.679828 kubelet[3208]: E0302 13:31:36.679809 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.679828 kubelet[3208]: W0302 13:31:36.679824 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.679902 kubelet[3208]: E0302 13:31:36.679836 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.680099 kubelet[3208]: E0302 13:31:36.680067 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.680185 kubelet[3208]: W0302 13:31:36.680098 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.680185 kubelet[3208]: E0302 13:31:36.680112 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.680399 kubelet[3208]: E0302 13:31:36.680360 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.680888 kubelet[3208]: W0302 13:31:36.680377 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.680888 kubelet[3208]: E0302 13:31:36.680514 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.680987 kubelet[3208]: E0302 13:31:36.680876 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.680987 kubelet[3208]: W0302 13:31:36.680920 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.680987 kubelet[3208]: E0302 13:31:36.680934 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.681509 kubelet[3208]: E0302 13:31:36.681369 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.681509 kubelet[3208]: W0302 13:31:36.681387 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.681509 kubelet[3208]: E0302 13:31:36.681401 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.682044 kubelet[3208]: E0302 13:31:36.682013 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.682044 kubelet[3208]: W0302 13:31:36.682035 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.682044 kubelet[3208]: E0302 13:31:36.682048 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.682624 kubelet[3208]: E0302 13:31:36.682598 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.682624 kubelet[3208]: W0302 13:31:36.682620 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.682722 kubelet[3208]: E0302 13:31:36.682633 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.683156 kubelet[3208]: E0302 13:31:36.683136 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.683156 kubelet[3208]: W0302 13:31:36.683155 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.683227 kubelet[3208]: E0302 13:31:36.683183 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.683426 kubelet[3208]: E0302 13:31:36.683408 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.683426 kubelet[3208]: W0302 13:31:36.683424 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.683520 kubelet[3208]: E0302 13:31:36.683434 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.683797 kubelet[3208]: E0302 13:31:36.683764 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.683797 kubelet[3208]: W0302 13:31:36.683795 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.683870 kubelet[3208]: E0302 13:31:36.683806 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.684059 kubelet[3208]: E0302 13:31:36.684042 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.684059 kubelet[3208]: W0302 13:31:36.684057 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.684125 kubelet[3208]: E0302 13:31:36.684067 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.684594 kubelet[3208]: E0302 13:31:36.684569 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.684808 kubelet[3208]: W0302 13:31:36.684590 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.684922 kubelet[3208]: E0302 13:31:36.684769 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.685196 kubelet[3208]: E0302 13:31:36.685178 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.685196 kubelet[3208]: W0302 13:31:36.685193 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.685263 kubelet[3208]: E0302 13:31:36.685254 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.686314 kubelet[3208]: E0302 13:31:36.686281 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.686314 kubelet[3208]: W0302 13:31:36.686304 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.686314 kubelet[3208]: E0302 13:31:36.686316 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.686632 kubelet[3208]: E0302 13:31:36.686611 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.686632 kubelet[3208]: W0302 13:31:36.686628 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.686819 kubelet[3208]: E0302 13:31:36.686646 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.687117 kubelet[3208]: E0302 13:31:36.687065 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.687117 kubelet[3208]: W0302 13:31:36.687098 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.687117 kubelet[3208]: E0302 13:31:36.687110 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.687512 kubelet[3208]: E0302 13:31:36.687447 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.687512 kubelet[3208]: W0302 13:31:36.687462 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.687512 kubelet[3208]: E0302 13:31:36.687493 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.687856 kubelet[3208]: E0302 13:31:36.687808 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.687856 kubelet[3208]: W0302 13:31:36.687825 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.687856 kubelet[3208]: E0302 13:31:36.687835 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:36.695904 kubelet[3208]: E0302 13:31:36.695824 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:36.695904 kubelet[3208]: W0302 13:31:36.695845 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:36.695904 kubelet[3208]: E0302 13:31:36.695861 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:36.702584 containerd[1729]: time="2026-03-02T13:31:36.702458418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jf24n,Uid:e359ec67-5e1b-45af-bad2-30e71d548555,Namespace:calico-system,Attempt:0,}" Mar 2 13:31:36.757808 containerd[1729]: time="2026-03-02T13:31:36.757686563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:31:36.757808 containerd[1729]: time="2026-03-02T13:31:36.757752723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:31:36.757808 containerd[1729]: time="2026-03-02T13:31:36.757779243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:31:36.758046 containerd[1729]: time="2026-03-02T13:31:36.757872244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:31:36.771677 systemd[1]: Started cri-containerd-c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea.scope - libcontainer container c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea. 
Mar 2 13:31:36.795671 containerd[1729]: time="2026-03-02T13:31:36.795560768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jf24n,Uid:e359ec67-5e1b-45af-bad2-30e71d548555,Namespace:calico-system,Attempt:0,} returns sandbox id \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\"" Mar 2 13:31:38.172211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342652231.mount: Deactivated successfully. Mar 2 13:31:38.249437 kubelet[3208]: E0302 13:31:38.249382 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:39.022240 containerd[1729]: time="2026-03-02T13:31:39.021506500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:39.025262 containerd[1729]: time="2026-03-02T13:31:39.025231705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.3: active requests=0, bytes read=33841852" Mar 2 13:31:39.028101 containerd[1729]: time="2026-03-02T13:31:39.028046868Z" level=info msg="ImageCreate event name:\"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:39.033187 containerd[1729]: time="2026-03-02T13:31:39.033137034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:39.033943 containerd[1729]: time="2026-03-02T13:31:39.033830315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.3\" with image id 
\"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:3e62cf98a20c42a1786397d0192cfb639634ef95c6f463ab92f0439a5c1a4ae5\", size \"33841706\" in 2.357827128s" Mar 2 13:31:39.033943 containerd[1729]: time="2026-03-02T13:31:39.033864715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.3\" returns image reference \"sha256:d28a261c14ff1c1c526940695055ffc414471b39d275a706eac99ccbbd5fdc62\"" Mar 2 13:31:39.035319 containerd[1729]: time="2026-03-02T13:31:39.035110196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\"" Mar 2 13:31:39.052085 containerd[1729]: time="2026-03-02T13:31:39.052037456Z" level=info msg="CreateContainer within sandbox \"d76d67cd11d8160495655c4c2a130ce8d84a85166376258b291368a5e034c174\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 2 13:31:39.100380 containerd[1729]: time="2026-03-02T13:31:39.100322433Z" level=info msg="CreateContainer within sandbox \"d76d67cd11d8160495655c4c2a130ce8d84a85166376258b291368a5e034c174\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"03b03272f06f898069909d8a9d133592d2a1fe8fa811587ed0e207756d0e2f74\"" Mar 2 13:31:39.101496 containerd[1729]: time="2026-03-02T13:31:39.101335034Z" level=info msg="StartContainer for \"03b03272f06f898069909d8a9d133592d2a1fe8fa811587ed0e207756d0e2f74\"" Mar 2 13:31:39.135661 systemd[1]: Started cri-containerd-03b03272f06f898069909d8a9d133592d2a1fe8fa811587ed0e207756d0e2f74.scope - libcontainer container 03b03272f06f898069909d8a9d133592d2a1fe8fa811587ed0e207756d0e2f74. 
Mar 2 13:31:39.175350 containerd[1729]: time="2026-03-02T13:31:39.175277521Z" level=info msg="StartContainer for \"03b03272f06f898069909d8a9d133592d2a1fe8fa811587ed0e207756d0e2f74\" returns successfully"
Mar 2 13:31:39.376989 kubelet[3208]: E0302 13:31:39.376958 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.378591 kubelet[3208]: W0302 13:31:39.377392 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.378591 kubelet[3208]: E0302 13:31:39.377426 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.378965 kubelet[3208]: E0302 13:31:39.378789 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.378965 kubelet[3208]: W0302 13:31:39.378816 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.378965 kubelet[3208]: E0302 13:31:39.378869 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.379315 kubelet[3208]: E0302 13:31:39.379207 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.379315 kubelet[3208]: W0302 13:31:39.379223 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.379315 kubelet[3208]: E0302 13:31:39.379233 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.379540 kubelet[3208]: E0302 13:31:39.379406 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.379540 kubelet[3208]: W0302 13:31:39.379416 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.379540 kubelet[3208]: E0302 13:31:39.379426 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.379951 kubelet[3208]: E0302 13:31:39.379841 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.379951 kubelet[3208]: W0302 13:31:39.379858 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.379951 kubelet[3208]: E0302 13:31:39.379873 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.380596 kubelet[3208]: E0302 13:31:39.380037 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.380596 kubelet[3208]: W0302 13:31:39.380046 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.380596 kubelet[3208]: E0302 13:31:39.380058 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.381196 kubelet[3208]: E0302 13:31:39.381057 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.381196 kubelet[3208]: W0302 13:31:39.381079 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.381196 kubelet[3208]: E0302 13:31:39.381091 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.381454 kubelet[3208]: E0302 13:31:39.381292 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.381454 kubelet[3208]: W0302 13:31:39.381302 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.381454 kubelet[3208]: E0302 13:31:39.381311 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.382894 kubelet[3208]: E0302 13:31:39.382785 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.382894 kubelet[3208]: W0302 13:31:39.382804 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.382894 kubelet[3208]: E0302 13:31:39.382817 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.385926 kubelet[3208]: E0302 13:31:39.385904 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.386183 kubelet[3208]: W0302 13:31:39.386043 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.386183 kubelet[3208]: E0302 13:31:39.386065 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.386325 kubelet[3208]: E0302 13:31:39.386312 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.386487 kubelet[3208]: W0302 13:31:39.386382 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.386487 kubelet[3208]: E0302 13:31:39.386396 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.386959 kubelet[3208]: E0302 13:31:39.386820 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.386959 kubelet[3208]: W0302 13:31:39.386836 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.386959 kubelet[3208]: E0302 13:31:39.386848 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.387171 kubelet[3208]: E0302 13:31:39.387139 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.387584 kubelet[3208]: W0302 13:31:39.387563 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.387844 kubelet[3208]: E0302 13:31:39.387764 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.388451 kubelet[3208]: E0302 13:31:39.388228 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.388451 kubelet[3208]: W0302 13:31:39.388242 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.388451 kubelet[3208]: E0302 13:31:39.388253 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.389032 kubelet[3208]: E0302 13:31:39.388923 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.389032 kubelet[3208]: W0302 13:31:39.388941 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.389032 kubelet[3208]: E0302 13:31:39.388952 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.400093 kubelet[3208]: E0302 13:31:39.399909 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.400093 kubelet[3208]: W0302 13:31:39.399940 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.400093 kubelet[3208]: E0302 13:31:39.399962 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.400404 kubelet[3208]: E0302 13:31:39.400384 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.400606 kubelet[3208]: W0302 13:31:39.400402 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.400606 kubelet[3208]: E0302 13:31:39.400422 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.400916 kubelet[3208]: E0302 13:31:39.400893 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.401059 kubelet[3208]: W0302 13:31:39.400990 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.401059 kubelet[3208]: E0302 13:31:39.401013 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.402492 kubelet[3208]: E0302 13:31:39.402332 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.402492 kubelet[3208]: W0302 13:31:39.402357 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.402492 kubelet[3208]: E0302 13:31:39.402375 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.402711 kubelet[3208]: E0302 13:31:39.402679 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.402711 kubelet[3208]: W0302 13:31:39.402692 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.402857 kubelet[3208]: E0302 13:31:39.402702 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.403245 kubelet[3208]: E0302 13:31:39.403144 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.403245 kubelet[3208]: W0302 13:31:39.403159 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.403245 kubelet[3208]: E0302 13:31:39.403170 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.404012 kubelet[3208]: E0302 13:31:39.403930 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.404012 kubelet[3208]: W0302 13:31:39.403948 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.404012 kubelet[3208]: E0302 13:31:39.403962 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.405210 kubelet[3208]: E0302 13:31:39.404573 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.405210 kubelet[3208]: W0302 13:31:39.404589 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.405210 kubelet[3208]: E0302 13:31:39.404603 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.406835 kubelet[3208]: E0302 13:31:39.406692 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.406835 kubelet[3208]: W0302 13:31:39.406715 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.406835 kubelet[3208]: E0302 13:31:39.406733 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.407046 kubelet[3208]: E0302 13:31:39.407004 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.407046 kubelet[3208]: W0302 13:31:39.407017 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.407046 kubelet[3208]: E0302 13:31:39.407028 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.407260 kubelet[3208]: E0302 13:31:39.407243 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.407260 kubelet[3208]: W0302 13:31:39.407257 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.407372 kubelet[3208]: E0302 13:31:39.407269 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.407607 kubelet[3208]: E0302 13:31:39.407582 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.407607 kubelet[3208]: W0302 13:31:39.407604 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.407724 kubelet[3208]: E0302 13:31:39.407622 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.407837 kubelet[3208]: E0302 13:31:39.407797 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.407837 kubelet[3208]: W0302 13:31:39.407806 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.407837 kubelet[3208]: E0302 13:31:39.407816 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.408337 kubelet[3208]: E0302 13:31:39.408312 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.408337 kubelet[3208]: W0302 13:31:39.408332 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.408542 kubelet[3208]: E0302 13:31:39.408346 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.409569 kubelet[3208]: E0302 13:31:39.409546 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.409569 kubelet[3208]: W0302 13:31:39.409564 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.409767 kubelet[3208]: E0302 13:31:39.409578 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.409880 kubelet[3208]: E0302 13:31:39.409863 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.409880 kubelet[3208]: W0302 13:31:39.409878 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.409955 kubelet[3208]: E0302 13:31:39.409890 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.410307 kubelet[3208]: E0302 13:31:39.410180 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.410307 kubelet[3208]: W0302 13:31:39.410196 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.410307 kubelet[3208]: E0302 13:31:39.410209 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:39.410508 kubelet[3208]: E0302 13:31:39.410485 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:39.410611 kubelet[3208]: W0302 13:31:39.410565 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:39.410611 kubelet[3208]: E0302 13:31:39.410584 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.123017 kubelet[3208]: I0302 13:31:40.122546 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5986f5c599-dbmhb" podStartSLOduration=1.762601995 podStartE2EDuration="4.122527165s" podCreationTimestamp="2026-03-02 13:31:36 +0000 UTC" firstStartedPulling="2026-03-02 13:31:36.675076506 +0000 UTC m=+21.515506989" lastFinishedPulling="2026-03-02 13:31:39.035001676 +0000 UTC m=+23.875432159" observedRunningTime="2026-03-02 13:31:39.398353903 +0000 UTC m=+24.238784426" watchObservedRunningTime="2026-03-02 13:31:40.122527165 +0000 UTC m=+24.962957648"
Mar 2 13:31:40.249505 kubelet[3208]: E0302 13:31:40.248653 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d"
Mar 2 13:31:40.396737 kubelet[3208]: E0302 13:31:40.396286 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.396737 kubelet[3208]: W0302 13:31:40.396315 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.396737 kubelet[3208]: E0302 13:31:40.396345 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.398417 kubelet[3208]: E0302 13:31:40.397439 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.398417 kubelet[3208]: W0302 13:31:40.397459 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.398417 kubelet[3208]: E0302 13:31:40.397490 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.398417 kubelet[3208]: E0302 13:31:40.397688 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.398417 kubelet[3208]: W0302 13:31:40.397706 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.398417 kubelet[3208]: E0302 13:31:40.397716 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.398417 kubelet[3208]: E0302 13:31:40.398291 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.398417 kubelet[3208]: W0302 13:31:40.398304 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.398417 kubelet[3208]: E0302 13:31:40.398315 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.398942 kubelet[3208]: E0302 13:31:40.398832 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.398942 kubelet[3208]: W0302 13:31:40.398846 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.398942 kubelet[3208]: E0302 13:31:40.398857 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.399213 kubelet[3208]: E0302 13:31:40.399107 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.399213 kubelet[3208]: W0302 13:31:40.399118 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.399213 kubelet[3208]: E0302 13:31:40.399127 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.399357 kubelet[3208]: E0302 13:31:40.399346 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.399463 kubelet[3208]: W0302 13:31:40.399416 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.399463 kubelet[3208]: E0302 13:31:40.399430 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.399847 kubelet[3208]: E0302 13:31:40.399730 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.399847 kubelet[3208]: W0302 13:31:40.399742 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.399847 kubelet[3208]: E0302 13:31:40.399756 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.400156 kubelet[3208]: E0302 13:31:40.400067 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.400156 kubelet[3208]: W0302 13:31:40.400078 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.400156 kubelet[3208]: E0302 13:31:40.400089 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.400444 kubelet[3208]: E0302 13:31:40.400401 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.400444 kubelet[3208]: W0302 13:31:40.400413 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.400444 kubelet[3208]: E0302 13:31:40.400423 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.400838 kubelet[3208]: E0302 13:31:40.400708 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.400838 kubelet[3208]: W0302 13:31:40.400720 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.400838 kubelet[3208]: E0302 13:31:40.400763 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.401246 kubelet[3208]: E0302 13:31:40.401090 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.401246 kubelet[3208]: W0302 13:31:40.401101 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.401246 kubelet[3208]: E0302 13:31:40.401112 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.401537 kubelet[3208]: E0302 13:31:40.401402 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.401537 kubelet[3208]: W0302 13:31:40.401413 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.401537 kubelet[3208]: E0302 13:31:40.401423 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.401857 kubelet[3208]: E0302 13:31:40.401767 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.401857 kubelet[3208]: W0302 13:31:40.401780 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.401857 kubelet[3208]: E0302 13:31:40.401790 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.402219 kubelet[3208]: E0302 13:31:40.402140 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.402219 kubelet[3208]: W0302 13:31:40.402151 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.402219 kubelet[3208]: E0302 13:31:40.402162 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.409703 kubelet[3208]: E0302 13:31:40.409670 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.409703 kubelet[3208]: W0302 13:31:40.409696 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.409903 kubelet[3208]: E0302 13:31:40.409717 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 2 13:31:40.410870 kubelet[3208]: E0302 13:31:40.409934 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 2 13:31:40.410870 kubelet[3208]: W0302 13:31:40.409947 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 2 13:31:40.410870 kubelet[3208]: E0302 13:31:40.409957 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.410870 kubelet[3208]: E0302 13:31:40.410184 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.410870 kubelet[3208]: W0302 13:31:40.410247 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.410870 kubelet[3208]: E0302 13:31:40.410280 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.411614 kubelet[3208]: E0302 13:31:40.411408 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.411614 kubelet[3208]: W0302 13:31:40.411430 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.411614 kubelet[3208]: E0302 13:31:40.411449 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.412099 kubelet[3208]: E0302 13:31:40.411955 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.412099 kubelet[3208]: W0302 13:31:40.411969 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.412099 kubelet[3208]: E0302 13:31:40.411982 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.412585 kubelet[3208]: E0302 13:31:40.412425 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.412585 kubelet[3208]: W0302 13:31:40.412439 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.412585 kubelet[3208]: E0302 13:31:40.412451 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.412979 kubelet[3208]: E0302 13:31:40.412864 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.412979 kubelet[3208]: W0302 13:31:40.412878 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.412979 kubelet[3208]: E0302 13:31:40.412891 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.413180 kubelet[3208]: E0302 13:31:40.413169 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.413352 kubelet[3208]: W0302 13:31:40.413233 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.413352 kubelet[3208]: E0302 13:31:40.413248 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.414050 kubelet[3208]: E0302 13:31:40.413648 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.414050 kubelet[3208]: W0302 13:31:40.413662 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.414050 kubelet[3208]: E0302 13:31:40.413673 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.414274 kubelet[3208]: E0302 13:31:40.414255 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.414274 kubelet[3208]: W0302 13:31:40.414272 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.414353 kubelet[3208]: E0302 13:31:40.414292 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.414867 kubelet[3208]: E0302 13:31:40.414842 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.414867 kubelet[3208]: W0302 13:31:40.414861 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.414961 kubelet[3208]: E0302 13:31:40.414875 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.415310 kubelet[3208]: E0302 13:31:40.415291 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.415310 kubelet[3208]: W0302 13:31:40.415307 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.415534 kubelet[3208]: E0302 13:31:40.415320 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.415679 kubelet[3208]: E0302 13:31:40.415660 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.415764 kubelet[3208]: W0302 13:31:40.415677 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.415764 kubelet[3208]: E0302 13:31:40.415690 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.416160 kubelet[3208]: E0302 13:31:40.416140 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.416160 kubelet[3208]: W0302 13:31:40.416156 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.416160 kubelet[3208]: E0302 13:31:40.416169 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.416946 kubelet[3208]: E0302 13:31:40.416743 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.416946 kubelet[3208]: W0302 13:31:40.416759 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.416946 kubelet[3208]: E0302 13:31:40.416771 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.417270 kubelet[3208]: E0302 13:31:40.417108 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.417270 kubelet[3208]: W0302 13:31:40.417120 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.417270 kubelet[3208]: E0302 13:31:40.417132 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.417768 kubelet[3208]: E0302 13:31:40.417748 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.417768 kubelet[3208]: W0302 13:31:40.417765 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.417861 kubelet[3208]: E0302 13:31:40.417779 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 2 13:31:40.418343 kubelet[3208]: E0302 13:31:40.418320 3208 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 2 13:31:40.418343 kubelet[3208]: W0302 13:31:40.418346 3208 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 2 13:31:40.418429 kubelet[3208]: E0302 13:31:40.418360 3208 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 2 13:31:40.462000 containerd[1729]: time="2026-03-02T13:31:40.461947379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:40.464582 containerd[1729]: time="2026-03-02T13:31:40.464539382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3: active requests=0, bytes read=4456989" Mar 2 13:31:40.467974 containerd[1729]: time="2026-03-02T13:31:40.467916986Z" level=info msg="ImageCreate event name:\"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:40.472288 containerd[1729]: time="2026-03-02T13:31:40.472204111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:40.473718 containerd[1729]: time="2026-03-02T13:31:40.473145153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" with image id \"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:6cdc6cc2f7cdcbd4bf2d9b6a59c03ed98b5c47f22e467d78b5c06e5fd7bff132\", size \"5854474\" in 1.438004077s" Mar 2 13:31:40.473718 containerd[1729]: time="2026-03-02T13:31:40.473188633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.3\" returns image reference \"sha256:3c477f840adeca332cbee81ef65da50ec7be99ded887a8de75d5cf25b896d6a9\"" Mar 2 13:31:40.481694 containerd[1729]: time="2026-03-02T13:31:40.481651243Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 2 13:31:40.527544 containerd[1729]: time="2026-03-02T13:31:40.527412019Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce\"" Mar 2 13:31:40.528285 containerd[1729]: time="2026-03-02T13:31:40.528194780Z" level=info msg="StartContainer for \"a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce\"" Mar 2 13:31:40.558787 systemd[1]: Started cri-containerd-a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce.scope - libcontainer container a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce. Mar 2 13:31:40.594936 containerd[1729]: time="2026-03-02T13:31:40.594859261Z" level=info msg="StartContainer for \"a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce\" returns successfully" Mar 2 13:31:40.603698 systemd[1]: cri-containerd-a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce.scope: Deactivated successfully. Mar 2 13:31:40.629462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce-rootfs.mount: Deactivated successfully. 
Mar 2 13:31:41.733370 containerd[1729]: time="2026-03-02T13:31:41.733151249Z" level=info msg="shim disconnected" id=a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce namespace=k8s.io Mar 2 13:31:41.733370 containerd[1729]: time="2026-03-02T13:31:41.733205529Z" level=warning msg="cleaning up after shim disconnected" id=a13d96eb7e8d977599a6aa1e824231ac05a83a3b02690fbf2ec58b04f9944bce namespace=k8s.io Mar 2 13:31:41.733370 containerd[1729]: time="2026-03-02T13:31:41.733213689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:31:41.744267 containerd[1729]: time="2026-03-02T13:31:41.743507542Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:31:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 2 13:31:42.248591 kubelet[3208]: E0302 13:31:42.248539 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:42.352749 containerd[1729]: time="2026-03-02T13:31:42.352707564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\"" Mar 2 13:31:44.249005 kubelet[3208]: E0302 13:31:44.248806 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:46.249897 kubelet[3208]: E0302 13:31:46.248678 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:48.248839 kubelet[3208]: E0302 13:31:48.248403 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:50.250249 kubelet[3208]: E0302 13:31:50.249312 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:50.874032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974002677.mount: Deactivated successfully. 
Mar 2 13:31:50.920316 containerd[1729]: time="2026-03-02T13:31:50.920263840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:50.925647 containerd[1729]: time="2026-03-02T13:31:50.925601807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.3: active requests=0, bytes read=153583198" Mar 2 13:31:50.932028 containerd[1729]: time="2026-03-02T13:31:50.931970095Z" level=info msg="ImageCreate event name:\"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:50.938143 containerd[1729]: time="2026-03-02T13:31:50.937392502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:50.938143 containerd[1729]: time="2026-03-02T13:31:50.938009702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.3\" with image id \"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:c7aefc80042b94800407ab45640b59402d2897ae8755b9d8370516e7b0e404bc\", size \"153583060\" in 8.585264298s" Mar 2 13:31:50.938143 containerd[1729]: time="2026-03-02T13:31:50.938045542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.3\" returns image reference \"sha256:98788f64d6cabef718c2551eb8b42ec11d1bfaa912cfeb4f6bf240f79159575d\"" Mar 2 13:31:50.946116 containerd[1729]: time="2026-03-02T13:31:50.946066792Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 2 13:31:50.983633 containerd[1729]: time="2026-03-02T13:31:50.983501719Z" level=info msg="CreateContainer 
within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179\"" Mar 2 13:31:50.984760 containerd[1729]: time="2026-03-02T13:31:50.984154680Z" level=info msg="StartContainer for \"aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179\"" Mar 2 13:31:51.016667 systemd[1]: Started cri-containerd-aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179.scope - libcontainer container aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179. Mar 2 13:31:51.049010 containerd[1729]: time="2026-03-02T13:31:51.048959880Z" level=info msg="StartContainer for \"aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179\" returns successfully" Mar 2 13:31:51.091900 systemd[1]: cri-containerd-aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179.scope: Deactivated successfully. Mar 2 13:31:51.874645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179-rootfs.mount: Deactivated successfully. 
Mar 2 13:31:52.249570 kubelet[3208]: E0302 13:31:52.249038 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:52.678342 containerd[1729]: time="2026-03-02T13:31:52.678263984Z" level=info msg="shim disconnected" id=aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179 namespace=k8s.io Mar 2 13:31:52.678342 containerd[1729]: time="2026-03-02T13:31:52.678333104Z" level=warning msg="cleaning up after shim disconnected" id=aaf7e5a2c59f196aaee898b0b6355801766b941f5e0bef94087ba108000eb179 namespace=k8s.io Mar 2 13:31:52.678342 containerd[1729]: time="2026-03-02T13:31:52.678347824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:31:53.375087 containerd[1729]: time="2026-03-02T13:31:53.375040609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\"" Mar 2 13:31:54.249513 kubelet[3208]: E0302 13:31:54.248411 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:56.249260 kubelet[3208]: E0302 13:31:56.248818 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:57.555309 containerd[1729]: time="2026-03-02T13:31:57.554446654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:57.556918 containerd[1729]: time="2026-03-02T13:31:57.556880017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.3: active requests=0, bytes read=65998037" Mar 2 13:31:57.559832 containerd[1729]: time="2026-03-02T13:31:57.559776821Z" level=info msg="ImageCreate event name:\"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:57.564426 containerd[1729]: time="2026-03-02T13:31:57.564387346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:31:57.565678 containerd[1729]: time="2026-03-02T13:31:57.565388067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.3\" with image id \"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:c25deb6a4b79f5e595eb464adf9fb3735ea5623889e249d5b3efa0b42ffcbb47\", size \"67395562\" in 4.190307178s" Mar 2 13:31:57.565678 containerd[1729]: time="2026-03-02T13:31:57.565424307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.3\" returns image reference \"sha256:2aba526dc0b0f95b83ab38a811f41d3daf3ec5ae8876bf273b65b9f142277231\"" Mar 2 13:31:57.575210 containerd[1729]: time="2026-03-02T13:31:57.575081799Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 2 13:31:57.612163 containerd[1729]: time="2026-03-02T13:31:57.612113522Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99\"" Mar 2 13:31:57.614519 containerd[1729]: time="2026-03-02T13:31:57.613268964Z" level=info msg="StartContainer for \"a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99\"" Mar 2 13:31:57.645661 systemd[1]: Started cri-containerd-a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99.scope - libcontainer container a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99. Mar 2 13:31:57.676560 containerd[1729]: time="2026-03-02T13:31:57.675787998Z" level=info msg="StartContainer for \"a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99\" returns successfully" Mar 2 13:31:58.249199 kubelet[3208]: E0302 13:31:58.248815 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:31:59.347162 systemd[1]: cri-containerd-a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99.scope: Deactivated successfully. Mar 2 13:31:59.369552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99-rootfs.mount: Deactivated successfully. 
Mar 2 13:31:59.424828 kubelet[3208]: I0302 13:31:59.424800 3208 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 2 13:31:59.725185 containerd[1729]: time="2026-03-02T13:31:59.724923455Z" level=info msg="shim disconnected" id=a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99 namespace=k8s.io Mar 2 13:31:59.725185 containerd[1729]: time="2026-03-02T13:31:59.725000735Z" level=warning msg="cleaning up after shim disconnected" id=a9d5164f6b5376f944a42b8c019dd3a7c18d9f62869d29f177a4bc8970c0dc99 namespace=k8s.io Mar 2 13:31:59.725185 containerd[1729]: time="2026-03-02T13:31:59.725009775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:31:59.737481 kubelet[3208]: I0302 13:31:59.737341 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7638e352-eaab-46e6-a147-f204ad1cab74-config-volume\") pod \"coredns-674b8bbfcf-5dqvc\" (UID: \"7638e352-eaab-46e6-a147-f204ad1cab74\") " pod="kube-system/coredns-674b8bbfcf-5dqvc" Mar 2 13:31:59.737481 kubelet[3208]: I0302 13:31:59.737416 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564s2\" (UniqueName: \"kubernetes.io/projected/7638e352-eaab-46e6-a147-f204ad1cab74-kube-api-access-564s2\") pod \"coredns-674b8bbfcf-5dqvc\" (UID: \"7638e352-eaab-46e6-a147-f204ad1cab74\") " pod="kube-system/coredns-674b8bbfcf-5dqvc" Mar 2 13:31:59.739449 systemd[1]: Created slice kubepods-burstable-pod7638e352_eaab_46e6_a147_f204ad1cab74.slice - libcontainer container kubepods-burstable-pod7638e352_eaab_46e6_a147_f204ad1cab74.slice. Mar 2 13:31:59.755779 systemd[1]: Created slice kubepods-besteffort-pod0ce87770_1399_4d16_9b2e_f92d3470b97a.slice - libcontainer container kubepods-besteffort-pod0ce87770_1399_4d16_9b2e_f92d3470b97a.slice. 
Mar 2 13:31:59.765587 systemd[1]: Created slice kubepods-besteffort-pod72b2de10_35e9_4af6_93f9_e337a5d88bbb.slice - libcontainer container kubepods-besteffort-pod72b2de10_35e9_4af6_93f9_e337a5d88bbb.slice. Mar 2 13:31:59.778770 systemd[1]: Created slice kubepods-besteffort-pod08101d6f_0f5b_4f23_bfc4_ff79bac2631a.slice - libcontainer container kubepods-besteffort-pod08101d6f_0f5b_4f23_bfc4_ff79bac2631a.slice. Mar 2 13:31:59.795588 systemd[1]: Created slice kubepods-besteffort-pod03e66469_eeaa_4abc_aa42_114144b38f24.slice - libcontainer container kubepods-besteffort-pod03e66469_eeaa_4abc_aa42_114144b38f24.slice. Mar 2 13:31:59.802893 systemd[1]: Created slice kubepods-besteffort-pod07045e73_ca01_465e_919f_8f478e008766.slice - libcontainer container kubepods-besteffort-pod07045e73_ca01_465e_919f_8f478e008766.slice. Mar 2 13:31:59.812313 systemd[1]: Created slice kubepods-burstable-podcbb205fa_9365_49ad_a5f5_156be4f3e7b0.slice - libcontainer container kubepods-burstable-podcbb205fa_9365_49ad_a5f5_156be4f3e7b0.slice. 
Mar 2 13:31:59.838512 kubelet[3208]: I0302 13:31:59.838435 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07045e73-ca01-465e-919f-8f478e008766-goldmane-ca-bundle\") pod \"goldmane-9566f57b5-nnh7j\" (UID: \"07045e73-ca01-465e-919f-8f478e008766\") " pod="calico-system/goldmane-9566f57b5-nnh7j" Mar 2 13:31:59.838512 kubelet[3208]: I0302 13:31:59.838490 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jbj\" (UniqueName: \"kubernetes.io/projected/08101d6f-0f5b-4f23-bfc4-ff79bac2631a-kube-api-access-d4jbj\") pod \"calico-apiserver-6cd4748474-xxvfj\" (UID: \"08101d6f-0f5b-4f23-bfc4-ff79bac2631a\") " pod="calico-system/calico-apiserver-6cd4748474-xxvfj" Mar 2 13:31:59.839349 kubelet[3208]: I0302 13:31:59.838749 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb205fa-9365-49ad-a5f5-156be4f3e7b0-config-volume\") pod \"coredns-674b8bbfcf-gfmzk\" (UID: \"cbb205fa-9365-49ad-a5f5-156be4f3e7b0\") " pod="kube-system/coredns-674b8bbfcf-gfmzk" Mar 2 13:31:59.839349 kubelet[3208]: I0302 13:31:59.838805 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-nginx-config\") pod \"whisker-57d877f87b-svvn6\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " pod="calico-system/whisker-57d877f87b-svvn6" Mar 2 13:31:59.839349 kubelet[3208]: I0302 13:31:59.838841 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-backend-key-pair\") pod \"whisker-57d877f87b-svvn6\" (UID: 
\"03e66469-eeaa-4abc-aa42-114144b38f24\") " pod="calico-system/whisker-57d877f87b-svvn6" Mar 2 13:31:59.839349 kubelet[3208]: I0302 13:31:59.838888 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ce87770-1399-4d16-9b2e-f92d3470b97a-calico-apiserver-certs\") pod \"calico-apiserver-6cd4748474-c2zkg\" (UID: \"0ce87770-1399-4d16-9b2e-f92d3470b97a\") " pod="calico-system/calico-apiserver-6cd4748474-c2zkg" Mar 2 13:31:59.839349 kubelet[3208]: I0302 13:31:59.838915 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx8pc\" (UniqueName: \"kubernetes.io/projected/72b2de10-35e9-4af6-93f9-e337a5d88bbb-kube-api-access-bx8pc\") pod \"calico-kube-controllers-75dbc7c697-wkcts\" (UID: \"72b2de10-35e9-4af6-93f9-e337a5d88bbb\") " pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" Mar 2 13:31:59.841265 kubelet[3208]: I0302 13:31:59.839558 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gt9b\" (UniqueName: \"kubernetes.io/projected/03e66469-eeaa-4abc-aa42-114144b38f24-kube-api-access-9gt9b\") pod \"whisker-57d877f87b-svvn6\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " pod="calico-system/whisker-57d877f87b-svvn6" Mar 2 13:31:59.841265 kubelet[3208]: I0302 13:31:59.839603 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/07045e73-ca01-465e-919f-8f478e008766-goldmane-key-pair\") pod \"goldmane-9566f57b5-nnh7j\" (UID: \"07045e73-ca01-465e-919f-8f478e008766\") " pod="calico-system/goldmane-9566f57b5-nnh7j" Mar 2 13:31:59.841265 kubelet[3208]: I0302 13:31:59.839621 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/08101d6f-0f5b-4f23-bfc4-ff79bac2631a-calico-apiserver-certs\") pod \"calico-apiserver-6cd4748474-xxvfj\" (UID: \"08101d6f-0f5b-4f23-bfc4-ff79bac2631a\") " pod="calico-system/calico-apiserver-6cd4748474-xxvfj" Mar 2 13:31:59.841265 kubelet[3208]: I0302 13:31:59.839701 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjnx\" (UniqueName: \"kubernetes.io/projected/07045e73-ca01-465e-919f-8f478e008766-kube-api-access-qcjnx\") pod \"goldmane-9566f57b5-nnh7j\" (UID: \"07045e73-ca01-465e-919f-8f478e008766\") " pod="calico-system/goldmane-9566f57b5-nnh7j" Mar 2 13:31:59.841265 kubelet[3208]: I0302 13:31:59.839718 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9lx\" (UniqueName: \"kubernetes.io/projected/0ce87770-1399-4d16-9b2e-f92d3470b97a-kube-api-access-9r9lx\") pod \"calico-apiserver-6cd4748474-c2zkg\" (UID: \"0ce87770-1399-4d16-9b2e-f92d3470b97a\") " pod="calico-system/calico-apiserver-6cd4748474-c2zkg" Mar 2 13:31:59.841388 kubelet[3208]: I0302 13:31:59.839738 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9f6\" (UniqueName: \"kubernetes.io/projected/cbb205fa-9365-49ad-a5f5-156be4f3e7b0-kube-api-access-2n9f6\") pod \"coredns-674b8bbfcf-gfmzk\" (UID: \"cbb205fa-9365-49ad-a5f5-156be4f3e7b0\") " pod="kube-system/coredns-674b8bbfcf-gfmzk" Mar 2 13:31:59.841388 kubelet[3208]: I0302 13:31:59.839756 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b2de10-35e9-4af6-93f9-e337a5d88bbb-tigera-ca-bundle\") pod \"calico-kube-controllers-75dbc7c697-wkcts\" (UID: \"72b2de10-35e9-4af6-93f9-e337a5d88bbb\") " pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" Mar 2 13:31:59.841388 kubelet[3208]: I0302 13:31:59.839773 3208 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-ca-bundle\") pod \"whisker-57d877f87b-svvn6\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " pod="calico-system/whisker-57d877f87b-svvn6" Mar 2 13:31:59.841388 kubelet[3208]: I0302 13:31:59.839791 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07045e73-ca01-465e-919f-8f478e008766-config\") pod \"goldmane-9566f57b5-nnh7j\" (UID: \"07045e73-ca01-465e-919f-8f478e008766\") " pod="calico-system/goldmane-9566f57b5-nnh7j" Mar 2 13:32:00.052503 containerd[1729]: time="2026-03-02T13:32:00.052334121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5dqvc,Uid:7638e352-eaab-46e6-a147-f204ad1cab74,Namespace:kube-system,Attempt:0,}" Mar 2 13:32:00.065786 containerd[1729]: time="2026-03-02T13:32:00.065742737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-c2zkg,Uid:0ce87770-1399-4d16-9b2e-f92d3470b97a,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:00.074806 containerd[1729]: time="2026-03-02T13:32:00.074760508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dbc7c697-wkcts,Uid:72b2de10-35e9-4af6-93f9-e337a5d88bbb,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:00.093110 containerd[1729]: time="2026-03-02T13:32:00.093063649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-xxvfj,Uid:08101d6f-0f5b-4f23-bfc4-ff79bac2631a,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:00.101086 containerd[1729]: time="2026-03-02T13:32:00.101034619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d877f87b-svvn6,Uid:03e66469-eeaa-4abc-aa42-114144b38f24,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:00.110209 containerd[1729]: 
time="2026-03-02T13:32:00.110170390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-nnh7j,Uid:07045e73-ca01-465e-919f-8f478e008766,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:00.116438 containerd[1729]: time="2026-03-02T13:32:00.116399397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfmzk,Uid:cbb205fa-9365-49ad-a5f5-156be4f3e7b0,Namespace:kube-system,Attempt:0,}" Mar 2 13:32:00.194172 containerd[1729]: time="2026-03-02T13:32:00.194097689Z" level=error msg="Failed to destroy network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.194869 containerd[1729]: time="2026-03-02T13:32:00.194693929Z" level=error msg="encountered an error cleaning up failed sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.194869 containerd[1729]: time="2026-03-02T13:32:00.194748169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5dqvc,Uid:7638e352-eaab-46e6-a147-f204ad1cab74,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.195056 kubelet[3208]: E0302 13:32:00.194968 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.195056 kubelet[3208]: E0302 13:32:00.195042 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5dqvc" Mar 2 13:32:00.195137 kubelet[3208]: E0302 13:32:00.195063 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5dqvc" Mar 2 13:32:00.195137 kubelet[3208]: E0302 13:32:00.195116 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5dqvc_kube-system(7638e352-eaab-46e6-a147-f204ad1cab74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5dqvc_kube-system(7638e352-eaab-46e6-a147-f204ad1cab74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5dqvc" 
podUID="7638e352-eaab-46e6-a147-f204ad1cab74" Mar 2 13:32:00.256784 systemd[1]: Created slice kubepods-besteffort-pod8a206f66_6255_4ca6_b52b_ffcb569d488d.slice - libcontainer container kubepods-besteffort-pod8a206f66_6255_4ca6_b52b_ffcb569d488d.slice. Mar 2 13:32:00.262616 containerd[1729]: time="2026-03-02T13:32:00.262576249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm7xw,Uid:8a206f66-6255-4ca6-b52b-ffcb569d488d,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:00.356781 containerd[1729]: time="2026-03-02T13:32:00.356629000Z" level=error msg="Failed to destroy network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.357311 containerd[1729]: time="2026-03-02T13:32:00.357191041Z" level=error msg="encountered an error cleaning up failed sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.357311 containerd[1729]: time="2026-03-02T13:32:00.357256321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dbc7c697-wkcts,Uid:72b2de10-35e9-4af6-93f9-e337a5d88bbb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.357590 kubelet[3208]: E0302 13:32:00.357496 3208 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.357590 kubelet[3208]: E0302 13:32:00.357571 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" Mar 2 13:32:00.357682 kubelet[3208]: E0302 13:32:00.357594 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" Mar 2 13:32:00.357682 kubelet[3208]: E0302 13:32:00.357638 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75dbc7c697-wkcts_calico-system(72b2de10-35e9-4af6-93f9-e337a5d88bbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75dbc7c697-wkcts_calico-system(72b2de10-35e9-4af6-93f9-e337a5d88bbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" podUID="72b2de10-35e9-4af6-93f9-e337a5d88bbb" Mar 2 13:32:00.367886 containerd[1729]: time="2026-03-02T13:32:00.367323973Z" level=error msg="Failed to destroy network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.367886 containerd[1729]: time="2026-03-02T13:32:00.367740693Z" level=error msg="encountered an error cleaning up failed sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.367886 containerd[1729]: time="2026-03-02T13:32:00.367795773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-xxvfj,Uid:08101d6f-0f5b-4f23-bfc4-ff79bac2631a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.368095 kubelet[3208]: E0302 13:32:00.368007 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Mar 2 13:32:00.368095 kubelet[3208]: E0302 13:32:00.368066 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cd4748474-xxvfj" Mar 2 13:32:00.368095 kubelet[3208]: E0302 13:32:00.368089 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cd4748474-xxvfj" Mar 2 13:32:00.368234 kubelet[3208]: E0302 13:32:00.368132 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd4748474-xxvfj_calico-system(08101d6f-0f5b-4f23-bfc4-ff79bac2631a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd4748474-xxvfj_calico-system(08101d6f-0f5b-4f23-bfc4-ff79bac2631a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cd4748474-xxvfj" podUID="08101d6f-0f5b-4f23-bfc4-ff79bac2631a" Mar 2 13:32:00.409815 kubelet[3208]: I0302 13:32:00.409761 3208 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:00.412941 containerd[1729]: time="2026-03-02T13:32:00.412659866Z" level=info msg="StopPodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\"" Mar 2 13:32:00.412941 containerd[1729]: time="2026-03-02T13:32:00.412871227Z" level=info msg="Ensure that sandbox b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28 in task-service has been cleanup successfully" Mar 2 13:32:00.439440 kubelet[3208]: I0302 13:32:00.439285 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:00.448948 containerd[1729]: time="2026-03-02T13:32:00.445456345Z" level=info msg="StopPodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\"" Mar 2 13:32:00.449841 containerd[1729]: time="2026-03-02T13:32:00.449711230Z" level=info msg="Ensure that sandbox 77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1 in task-service has been cleanup successfully" Mar 2 13:32:00.458064 containerd[1729]: time="2026-03-02T13:32:00.457934640Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 2 13:32:00.458201 containerd[1729]: time="2026-03-02T13:32:00.458154880Z" level=error msg="Failed to destroy network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.462194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740-shm.mount: Deactivated successfully. 
Mar 2 13:32:00.467706 kubelet[3208]: I0302 13:32:00.467664 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:00.473106 containerd[1729]: time="2026-03-02T13:32:00.472085056Z" level=info msg="StopPodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\"" Mar 2 13:32:00.473106 containerd[1729]: time="2026-03-02T13:32:00.472279337Z" level=info msg="Ensure that sandbox 43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885 in task-service has been cleanup successfully" Mar 2 13:32:00.474690 containerd[1729]: time="2026-03-02T13:32:00.474651300Z" level=error msg="encountered an error cleaning up failed sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.475176 containerd[1729]: time="2026-03-02T13:32:00.474994220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-c2zkg,Uid:0ce87770-1399-4d16-9b2e-f92d3470b97a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.489169 kubelet[3208]: E0302 13:32:00.488741 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.489169 kubelet[3208]: E0302 13:32:00.488797 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cd4748474-c2zkg" Mar 2 13:32:00.489169 kubelet[3208]: E0302 13:32:00.488816 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6cd4748474-c2zkg" Mar 2 13:32:00.489364 kubelet[3208]: E0302 13:32:00.488881 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd4748474-c2zkg_calico-system(0ce87770-1399-4d16-9b2e-f92d3470b97a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd4748474-c2zkg_calico-system(0ce87770-1399-4d16-9b2e-f92d3470b97a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cd4748474-c2zkg" podUID="0ce87770-1399-4d16-9b2e-f92d3470b97a" Mar 2 13:32:00.543525 containerd[1729]: time="2026-03-02T13:32:00.539262696Z" level=error 
msg="Failed to destroy network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.544670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe-shm.mount: Deactivated successfully. Mar 2 13:32:00.549034 containerd[1729]: time="2026-03-02T13:32:00.548913107Z" level=error msg="encountered an error cleaning up failed sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.549262 containerd[1729]: time="2026-03-02T13:32:00.549217347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfmzk,Uid:cbb205fa-9365-49ad-a5f5-156be4f3e7b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.549565 kubelet[3208]: E0302 13:32:00.549530 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.551907 kubelet[3208]: E0302 13:32:00.549704 3208 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gfmzk" Mar 2 13:32:00.551907 kubelet[3208]: E0302 13:32:00.551538 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gfmzk" Mar 2 13:32:00.551907 kubelet[3208]: E0302 13:32:00.551623 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gfmzk_kube-system(cbb205fa-9365-49ad-a5f5-156be4f3e7b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gfmzk_kube-system(cbb205fa-9365-49ad-a5f5-156be4f3e7b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gfmzk" podUID="cbb205fa-9365-49ad-a5f5-156be4f3e7b0" Mar 2 13:32:00.641461 containerd[1729]: time="2026-03-02T13:32:00.641308576Z" level=error msg="Failed to destroy network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.641765 containerd[1729]: time="2026-03-02T13:32:00.641733457Z" level=error msg="encountered an error cleaning up failed sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.641824 containerd[1729]: time="2026-03-02T13:32:00.641793337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm7xw,Uid:8a206f66-6255-4ca6-b52b-ffcb569d488d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.644362 kubelet[3208]: E0302 13:32:00.643935 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.644362 kubelet[3208]: E0302 13:32:00.643993 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:32:00.644362 kubelet[3208]: E0302 13:32:00.644012 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jm7xw" Mar 2 13:32:00.644596 kubelet[3208]: E0302 13:32:00.644066 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jm7xw_calico-system(8a206f66-6255-4ca6-b52b-ffcb569d488d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jm7xw_calico-system(8a206f66-6255-4ca6-b52b-ffcb569d488d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jm7xw" podUID="8a206f66-6255-4ca6-b52b-ffcb569d488d" Mar 2 13:32:00.644790 containerd[1729]: time="2026-03-02T13:32:00.644756420Z" level=error msg="Failed to destroy network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.645189 containerd[1729]: time="2026-03-02T13:32:00.645161381Z" level=error msg="encountered an error cleaning up failed sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.645307 containerd[1729]: time="2026-03-02T13:32:00.645285501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d877f87b-svvn6,Uid:03e66469-eeaa-4abc-aa42-114144b38f24,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.645606 kubelet[3208]: E0302 13:32:00.645557 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.645902 kubelet[3208]: E0302 13:32:00.645725 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d877f87b-svvn6" Mar 2 13:32:00.645902 kubelet[3208]: E0302 13:32:00.645806 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d877f87b-svvn6" Mar 2 13:32:00.645902 kubelet[3208]: E0302 13:32:00.645857 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d877f87b-svvn6_calico-system(03e66469-eeaa-4abc-aa42-114144b38f24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d877f87b-svvn6_calico-system(03e66469-eeaa-4abc-aa42-114144b38f24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d877f87b-svvn6" podUID="03e66469-eeaa-4abc-aa42-114144b38f24" Mar 2 13:32:00.656229 containerd[1729]: time="2026-03-02T13:32:00.656172554Z" level=info msg="CreateContainer within sandbox \"c717decd33d700cd05a75d2cc0a524fb143853b188777fd8e7393edd0edae8ea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"76dad6686355124d37e58e09a6aeb65b33e23e6a4086da7534448412664961eb\"" Mar 2 13:32:00.656857 containerd[1729]: time="2026-03-02T13:32:00.656823314Z" level=info msg="StartContainer for \"76dad6686355124d37e58e09a6aeb65b33e23e6a4086da7534448412664961eb\"" Mar 2 13:32:00.668752 containerd[1729]: time="2026-03-02T13:32:00.668684008Z" level=error msg="StopPodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" failed" error="failed to destroy network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.669440 kubelet[3208]: E0302 13:32:00.669185 3208 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:00.669440 kubelet[3208]: E0302 13:32:00.669267 3208 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28"} Mar 2 13:32:00.669440 kubelet[3208]: E0302 13:32:00.669345 3208 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08101d6f-0f5b-4f23-bfc4-ff79bac2631a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 13:32:00.669440 kubelet[3208]: E0302 13:32:00.669407 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08101d6f-0f5b-4f23-bfc4-ff79bac2631a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6cd4748474-xxvfj" podUID="08101d6f-0f5b-4f23-bfc4-ff79bac2631a" Mar 2 13:32:00.675743 containerd[1729]: time="2026-03-02T13:32:00.675691897Z" level=error 
msg="StopPodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" failed" error="failed to destroy network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.675950 containerd[1729]: time="2026-03-02T13:32:00.675702177Z" level=error msg="StopPodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" failed" error="failed to destroy network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.676248 kubelet[3208]: E0302 13:32:00.676097 3208 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:00.676248 kubelet[3208]: E0302 13:32:00.676154 3208 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885"} Mar 2 13:32:00.676248 kubelet[3208]: E0302 13:32:00.676165 3208 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:00.676248 kubelet[3208]: E0302 13:32:00.676187 3208 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7638e352-eaab-46e6-a147-f204ad1cab74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 13:32:00.676248 kubelet[3208]: E0302 13:32:00.676205 3208 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1"} Mar 2 13:32:00.676558 kubelet[3208]: E0302 13:32:00.676211 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7638e352-eaab-46e6-a147-f204ad1cab74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5dqvc" podUID="7638e352-eaab-46e6-a147-f204ad1cab74" Mar 2 13:32:00.676558 kubelet[3208]: E0302 13:32:00.676245 3208 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72b2de10-35e9-4af6-93f9-e337a5d88bbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 2 13:32:00.676558 kubelet[3208]: E0302 13:32:00.676349 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72b2de10-35e9-4af6-93f9-e337a5d88bbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" podUID="72b2de10-35e9-4af6-93f9-e337a5d88bbb" Mar 2 13:32:00.680726 containerd[1729]: time="2026-03-02T13:32:00.680656823Z" level=error msg="Failed to destroy network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.681384 containerd[1729]: time="2026-03-02T13:32:00.681258943Z" level=error msg="encountered an error cleaning up failed sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.681384 containerd[1729]: time="2026-03-02T13:32:00.681334943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-nnh7j,Uid:07045e73-ca01-465e-919f-8f478e008766,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.682186 kubelet[3208]: E0302 13:32:00.681744 3208 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 2 13:32:00.682186 kubelet[3208]: E0302 13:32:00.681808 3208 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9566f57b5-nnh7j" Mar 2 13:32:00.682186 kubelet[3208]: E0302 13:32:00.681830 3208 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9566f57b5-nnh7j" Mar 2 13:32:00.682360 kubelet[3208]: E0302 13:32:00.681888 3208 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9566f57b5-nnh7j_calico-system(07045e73-ca01-465e-919f-8f478e008766)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-9566f57b5-nnh7j_calico-system(07045e73-ca01-465e-919f-8f478e008766)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9566f57b5-nnh7j" podUID="07045e73-ca01-465e-919f-8f478e008766" Mar 2 13:32:00.700650 systemd[1]: Started cri-containerd-76dad6686355124d37e58e09a6aeb65b33e23e6a4086da7534448412664961eb.scope - libcontainer container 76dad6686355124d37e58e09a6aeb65b33e23e6a4086da7534448412664961eb. Mar 2 13:32:00.732778 containerd[1729]: time="2026-03-02T13:32:00.732728884Z" level=info msg="StartContainer for \"76dad6686355124d37e58e09a6aeb65b33e23e6a4086da7534448412664961eb\" returns successfully" Mar 2 13:32:01.371280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7-shm.mount: Deactivated successfully. Mar 2 13:32:01.371369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb-shm.mount: Deactivated successfully. Mar 2 13:32:01.371419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd-shm.mount: Deactivated successfully. 
Mar 2 13:32:01.478553 kubelet[3208]: I0302 13:32:01.478216 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:01.480646 containerd[1729]: time="2026-03-02T13:32:01.479260805Z" level=info msg="StopPodSandbox for \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\"" Mar 2 13:32:01.480646 containerd[1729]: time="2026-03-02T13:32:01.479497605Z" level=info msg="Ensure that sandbox 909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb in task-service has been cleanup successfully" Mar 2 13:32:01.493445 kubelet[3208]: I0302 13:32:01.493388 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:01.495171 containerd[1729]: time="2026-03-02T13:32:01.495127103Z" level=info msg="StopPodSandbox for \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\"" Mar 2 13:32:01.496402 containerd[1729]: time="2026-03-02T13:32:01.495298184Z" level=info msg="Ensure that sandbox e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740 in task-service has been cleanup successfully" Mar 2 13:32:01.498704 kubelet[3208]: I0302 13:32:01.498555 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:01.499177 containerd[1729]: time="2026-03-02T13:32:01.499134388Z" level=info msg="StopPodSandbox for \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\"" Mar 2 13:32:01.499371 containerd[1729]: time="2026-03-02T13:32:01.499301348Z" level=info msg="Ensure that sandbox 5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe in task-service has been cleanup successfully" Mar 2 13:32:01.509052 kubelet[3208]: I0302 13:32:01.508936 3208 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:01.511529 containerd[1729]: time="2026-03-02T13:32:01.510651842Z" level=info msg="StopPodSandbox for \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\"" Mar 2 13:32:01.511623 containerd[1729]: time="2026-03-02T13:32:01.511597203Z" level=info msg="Ensure that sandbox 7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7 in task-service has been cleanup successfully" Mar 2 13:32:01.512523 kubelet[3208]: I0302 13:32:01.512499 3208 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:01.513369 containerd[1729]: time="2026-03-02T13:32:01.513218885Z" level=info msg="StopPodSandbox for \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\"" Mar 2 13:32:01.513677 containerd[1729]: time="2026-03-02T13:32:01.513486845Z" level=info msg="Ensure that sandbox f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd in task-service has been cleanup successfully" Mar 2 13:32:01.654763 kubelet[3208]: I0302 13:32:01.654260 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jf24n" podStartSLOduration=4.884911392 podStartE2EDuration="25.654236651s" podCreationTimestamp="2026-03-02 13:31:36 +0000 UTC" firstStartedPulling="2026-03-02 13:31:36.79726053 +0000 UTC m=+21.637691013" lastFinishedPulling="2026-03-02 13:31:57.566585789 +0000 UTC m=+42.407016272" observedRunningTime="2026-03-02 13:32:01.503969354 +0000 UTC m=+46.344399837" watchObservedRunningTime="2026-03-02 13:32:01.654236651 +0000 UTC m=+46.494667134" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.662 [INFO][4384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.662 
[INFO][4384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" iface="eth0" netns="/var/run/netns/cni-92cb84c0-5b77-2865-776f-697bda4459ce" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.663 [INFO][4384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" iface="eth0" netns="/var/run/netns/cni-92cb84c0-5b77-2865-776f-697bda4459ce" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.663 [INFO][4384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" iface="eth0" netns="/var/run/netns/cni-92cb84c0-5b77-2865-776f-697bda4459ce" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.664 [INFO][4384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.664 [INFO][4384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.716 [INFO][4467] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.717 [INFO][4467] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.719 [INFO][4467] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.741 [WARNING][4467] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.741 [INFO][4467] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.761 [INFO][4467] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:01.777347 containerd[1729]: 2026-03-02 13:32:01.770 [INFO][4384] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:01.781351 systemd[1]: run-netns-cni\x2d92cb84c0\x2d5b77\x2d2865\x2d776f\x2d697bda4459ce.mount: Deactivated successfully. 
Mar 2 13:32:01.783357 containerd[1729]: time="2026-03-02T13:32:01.782888083Z" level=info msg="TearDown network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\" successfully" Mar 2 13:32:01.783357 containerd[1729]: time="2026-03-02T13:32:01.782961203Z" level=info msg="StopPodSandbox for \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\" returns successfully" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.655 [INFO][4430] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.656 [INFO][4430] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" iface="eth0" netns="/var/run/netns/cni-1fee472b-b013-e565-67d7-24cd1d0b5535" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.656 [INFO][4430] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" iface="eth0" netns="/var/run/netns/cni-1fee472b-b013-e565-67d7-24cd1d0b5535" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.658 [INFO][4430] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" iface="eth0" netns="/var/run/netns/cni-1fee472b-b013-e565-67d7-24cd1d0b5535" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.658 [INFO][4430] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.658 [INFO][4430] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.724 [INFO][4465] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.724 [INFO][4465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.764 [INFO][4465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.800 [WARNING][4465] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.800 [INFO][4465] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.804 [INFO][4465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:01.818459 containerd[1729]: 2026-03-02 13:32:01.813 [INFO][4430] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:01.822169 containerd[1729]: time="2026-03-02T13:32:01.819616726Z" level=info msg="TearDown network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\" successfully" Mar 2 13:32:01.822169 containerd[1729]: time="2026-03-02T13:32:01.819651806Z" level=info msg="StopPodSandbox for \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\" returns successfully" Mar 2 13:32:01.820988 systemd[1]: run-netns-cni\x2d1fee472b\x2db013\x2de565\x2d67d7\x2d24cd1d0b5535.mount: Deactivated successfully. 
Mar 2 13:32:01.823336 containerd[1729]: time="2026-03-02T13:32:01.822438730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfmzk,Uid:cbb205fa-9365-49ad-a5f5-156be4f3e7b0,Namespace:kube-system,Attempt:1,}" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.753 [INFO][4438] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.753 [INFO][4438] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" iface="eth0" netns="/var/run/netns/cni-893ad51b-4e13-23f3-7517-352884487946" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.753 [INFO][4438] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" iface="eth0" netns="/var/run/netns/cni-893ad51b-4e13-23f3-7517-352884487946" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.753 [INFO][4438] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" iface="eth0" netns="/var/run/netns/cni-893ad51b-4e13-23f3-7517-352884487946" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.753 [INFO][4438] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.753 [INFO][4438] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.808 [INFO][4487] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.808 [INFO][4487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.808 [INFO][4487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.827 [WARNING][4487] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.827 [INFO][4487] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.829 [INFO][4487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:01.836620 containerd[1729]: 2026-03-02 13:32:01.832 [INFO][4438] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:01.837409 containerd[1729]: time="2026-03-02T13:32:01.837377107Z" level=info msg="TearDown network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\" successfully" Mar 2 13:32:01.838100 containerd[1729]: time="2026-03-02T13:32:01.837503827Z" level=info msg="StopPodSandbox for \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\" returns successfully" Mar 2 13:32:01.838314 containerd[1729]: time="2026-03-02T13:32:01.838271028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm7xw,Uid:8a206f66-6255-4ca6-b52b-ffcb569d488d,Namespace:calico-system,Attempt:1,}" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.739 [INFO][4416] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.740 [INFO][4416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" iface="eth0" netns="/var/run/netns/cni-337a9144-1d69-f764-a702-924fd7a2479b" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.742 [INFO][4416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" iface="eth0" netns="/var/run/netns/cni-337a9144-1d69-f764-a702-924fd7a2479b" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.742 [INFO][4416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" iface="eth0" netns="/var/run/netns/cni-337a9144-1d69-f764-a702-924fd7a2479b" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.742 [INFO][4416] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.742 [INFO][4416] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.828 [INFO][4482] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.828 [INFO][4482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.831 [INFO][4482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.845 [WARNING][4482] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.845 [INFO][4482] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.847 [INFO][4482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:01.855843 containerd[1729]: 2026-03-02 13:32:01.850 [INFO][4416] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:01.856460 containerd[1729]: time="2026-03-02T13:32:01.856298050Z" level=info msg="TearDown network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\" successfully" Mar 2 13:32:01.856460 containerd[1729]: time="2026-03-02T13:32:01.856330570Z" level=info msg="StopPodSandbox for \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\" returns successfully" Mar 2 13:32:01.857679 containerd[1729]: time="2026-03-02T13:32:01.857644811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-c2zkg,Uid:0ce87770-1399-4d16-9b2e-f92d3470b97a,Namespace:calico-system,Attempt:1,}" Mar 2 13:32:01.867513 kubelet[3208]: I0302 13:32:01.867160 3208 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gt9b\" (UniqueName: \"kubernetes.io/projected/03e66469-eeaa-4abc-aa42-114144b38f24-kube-api-access-9gt9b\") pod \"03e66469-eeaa-4abc-aa42-114144b38f24\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " Mar 2 13:32:01.867513 kubelet[3208]: I0302 13:32:01.867217 3208 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-backend-key-pair\") pod \"03e66469-eeaa-4abc-aa42-114144b38f24\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " Mar 2 13:32:01.867513 kubelet[3208]: I0302 13:32:01.867244 3208 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-nginx-config\") pod \"03e66469-eeaa-4abc-aa42-114144b38f24\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " Mar 2 13:32:01.867513 kubelet[3208]: I0302 13:32:01.867271 3208 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-ca-bundle\") pod \"03e66469-eeaa-4abc-aa42-114144b38f24\" (UID: \"03e66469-eeaa-4abc-aa42-114144b38f24\") " Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.775 [INFO][4435] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.781 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" iface="eth0" netns="/var/run/netns/cni-c747874d-5a47-8927-f3e0-4c31122380d4" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.783 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" iface="eth0" netns="/var/run/netns/cni-c747874d-5a47-8927-f3e0-4c31122380d4" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.783 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" iface="eth0" netns="/var/run/netns/cni-c747874d-5a47-8927-f3e0-4c31122380d4" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.784 [INFO][4435] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.784 [INFO][4435] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.843 [INFO][4494] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.843 [INFO][4494] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.847 [INFO][4494] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.861 [WARNING][4494] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.862 [INFO][4494] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.864 [INFO][4494] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:01.868129 containerd[1729]: 2026-03-02 13:32:01.866 [INFO][4435] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:01.868462 containerd[1729]: time="2026-03-02T13:32:01.868238304Z" level=info msg="TearDown network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\" successfully" Mar 2 13:32:01.868462 containerd[1729]: time="2026-03-02T13:32:01.868265304Z" level=info msg="StopPodSandbox for \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\" returns successfully" Mar 2 13:32:01.869118 kubelet[3208]: I0302 13:32:01.869085 3208 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "03e66469-eeaa-4abc-aa42-114144b38f24" (UID: "03e66469-eeaa-4abc-aa42-114144b38f24"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:32:01.871133 kubelet[3208]: I0302 13:32:01.870816 3208 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "03e66469-eeaa-4abc-aa42-114144b38f24" (UID: "03e66469-eeaa-4abc-aa42-114144b38f24"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:32:01.872578 kubelet[3208]: I0302 13:32:01.872534 3208 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e66469-eeaa-4abc-aa42-114144b38f24-kube-api-access-9gt9b" (OuterVolumeSpecName: "kube-api-access-9gt9b") pod "03e66469-eeaa-4abc-aa42-114144b38f24" (UID: "03e66469-eeaa-4abc-aa42-114144b38f24"). InnerVolumeSpecName "kube-api-access-9gt9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:32:01.873560 containerd[1729]: time="2026-03-02T13:32:01.873525150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-nnh7j,Uid:07045e73-ca01-465e-919f-8f478e008766,Namespace:calico-system,Attempt:1,}" Mar 2 13:32:01.874533 kubelet[3208]: I0302 13:32:01.874414 3208 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "03e66469-eeaa-4abc-aa42-114144b38f24" (UID: "03e66469-eeaa-4abc-aa42-114144b38f24"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:32:01.968844 kubelet[3208]: I0302 13:32:01.968650 3208 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-backend-key-pair\") on node \"ci-4081.3.101-160832fd4e\" DevicePath \"\"" Mar 2 13:32:01.968844 kubelet[3208]: I0302 13:32:01.968698 3208 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-nginx-config\") on node \"ci-4081.3.101-160832fd4e\" DevicePath \"\"" Mar 2 13:32:01.968844 kubelet[3208]: I0302 13:32:01.968710 3208 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03e66469-eeaa-4abc-aa42-114144b38f24-whisker-ca-bundle\") on node \"ci-4081.3.101-160832fd4e\" DevicePath \"\"" Mar 2 13:32:01.968844 kubelet[3208]: I0302 13:32:01.968720 3208 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9gt9b\" (UniqueName: \"kubernetes.io/projected/03e66469-eeaa-4abc-aa42-114144b38f24-kube-api-access-9gt9b\") on node \"ci-4081.3.101-160832fd4e\" DevicePath \"\"" Mar 2 13:32:02.079338 systemd-networkd[1352]: calia708df760fa: Link UP Mar 2 13:32:02.080870 systemd-networkd[1352]: calia708df760fa: Gained carrier Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.896 [ERROR][4508] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.912 [INFO][4508] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0 coredns-674b8bbfcf- kube-system cbb205fa-9365-49ad-a5f5-156be4f3e7b0 902 0 2026-03-02 13:31:21 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e coredns-674b8bbfcf-gfmzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia708df760fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.912 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.952 [INFO][4524] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" HandleID="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.976 [INFO][4524] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" HandleID="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-160832fd4e", "pod":"coredns-674b8bbfcf-gfmzk", "timestamp":"2026-03-02 13:32:01.952103563 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002df080)} Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.977 [INFO][4524] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.977 [INFO][4524] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.977 [INFO][4524] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.980 [INFO][4524] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.990 [INFO][4524] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:01.997 [INFO][4524] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.000 [INFO][4524] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.004 [INFO][4524] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.004 [INFO][4524] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.006 
[INFO][4524] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262 Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.011 [INFO][4524] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.022 [INFO][4524] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.113.129/26] block=192.168.113.128/26 handle="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.022 [INFO][4524] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.129/26] handle="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.022 [INFO][4524] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 2 13:32:02.121303 containerd[1729]: 2026-03-02 13:32:02.022 [INFO][4524] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.129/26] IPv6=[] ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" HandleID="k8s-pod-network.c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.121916 containerd[1729]: 2026-03-02 13:32:02.027 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cbb205fa-9365-49ad-a5f5-156be4f3e7b0", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"coredns-674b8bbfcf-gfmzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia708df760fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.121916 containerd[1729]: 2026-03-02 13:32:02.028 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.129/32] ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.121916 containerd[1729]: 2026-03-02 13:32:02.028 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia708df760fa ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.121916 containerd[1729]: 2026-03-02 13:32:02.079 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.121916 containerd[1729]: 2026-03-02 13:32:02.080 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" 
WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cbb205fa-9365-49ad-a5f5-156be4f3e7b0", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262", Pod:"coredns-674b8bbfcf-gfmzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia708df760fa", MAC:"ce:84:ba:0e:fc:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.121916 containerd[1729]: 
2026-03-02 13:32:02.116 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262" Namespace="kube-system" Pod="coredns-674b8bbfcf-gfmzk" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:02.181343 systemd-networkd[1352]: calif8e9bdb62cd: Link UP Mar 2 13:32:02.182975 systemd-networkd[1352]: calif8e9bdb62cd: Gained carrier Mar 2 13:32:02.213843 containerd[1729]: time="2026-03-02T13:32:02.213249191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:02.213843 containerd[1729]: time="2026-03-02T13:32:02.213311071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:02.213843 containerd[1729]: time="2026-03-02T13:32:02.213322671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.213843 containerd[1729]: time="2026-03-02T13:32:02.213399311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:01.950 [ERROR][4519] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:01.988 [INFO][4519] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0 csi-node-driver- calico-system 8a206f66-6255-4ca6-b52b-ffcb569d488d 906 0 2026-03-02 13:31:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7494d65b57 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e csi-node-driver-jm7xw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif8e9bdb62cd [] [] }} ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:01.988 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.050 [INFO][4559] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" HandleID="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" 
Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.067 [INFO][4559] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" HandleID="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002737f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-160832fd4e", "pod":"csi-node-driver-jm7xw", "timestamp":"2026-03-02 13:32:02.050928319 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004b6f20)} Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.067 [INFO][4559] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.067 [INFO][4559] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.067 [INFO][4559] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.093 [INFO][4559] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.104 [INFO][4559] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.120 [INFO][4559] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.123 [INFO][4559] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.129 [INFO][4559] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.129 [INFO][4559] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.133 [INFO][4559] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.148 [INFO][4559] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.163 [INFO][4559] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.130/26] block=192.168.113.128/26 handle="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.163 [INFO][4559] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.130/26] handle="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.163 [INFO][4559] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:02.228575 containerd[1729]: 2026-03-02 13:32:02.168 [INFO][4559] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.130/26] IPv6=[] ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" HandleID="k8s-pod-network.f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.229157 containerd[1729]: 2026-03-02 13:32:02.176 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a206f66-6255-4ca6-b52b-ffcb569d488d", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"csi-node-driver-jm7xw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8e9bdb62cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.229157 containerd[1729]: 2026-03-02 13:32:02.176 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.130/32] ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.229157 containerd[1729]: 2026-03-02 13:32:02.177 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8e9bdb62cd ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.229157 containerd[1729]: 2026-03-02 13:32:02.188 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.229157 containerd[1729]: 
2026-03-02 13:32:02.189 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a206f66-6255-4ca6-b52b-ffcb569d488d", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb", Pod:"csi-node-driver-jm7xw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8e9bdb62cd", MAC:"2e:32:a5:57:06:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.229157 containerd[1729]: 2026-03-02 13:32:02.219 
[INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb" Namespace="calico-system" Pod="csi-node-driver-jm7xw" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:02.252793 systemd[1]: Started cri-containerd-c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262.scope - libcontainer container c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262. Mar 2 13:32:02.287013 systemd-networkd[1352]: calic443914566a: Link UP Mar 2 13:32:02.287370 systemd-networkd[1352]: calic443914566a: Gained carrier Mar 2 13:32:02.296781 containerd[1729]: time="2026-03-02T13:32:02.296595889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:02.296781 containerd[1729]: time="2026-03-02T13:32:02.296731809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:02.297128 containerd[1729]: time="2026-03-02T13:32:02.296762729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.297208 containerd[1729]: time="2026-03-02T13:32:02.297102930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:01.994 [ERROR][4546] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.022 [INFO][4546] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0 goldmane-9566f57b5- calico-system 07045e73-ca01-465e-919f-8f478e008766 907 0 2026-03-02 13:31:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9566f57b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e goldmane-9566f57b5-nnh7j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic443914566a [] [] }} ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.022 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.074 [INFO][4568] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" HandleID="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 
13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.108 [INFO][4568] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" HandleID="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-160832fd4e", "pod":"goldmane-9566f57b5-nnh7j", "timestamp":"2026-03-02 13:32:02.074380787 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001686e0)} Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.108 [INFO][4568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.164 [INFO][4568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.164 [INFO][4568] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.190 [INFO][4568] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.213 [INFO][4568] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.226 [INFO][4568] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.234 [INFO][4568] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.239 [INFO][4568] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.239 [INFO][4568] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.243 [INFO][4568] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2 Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.258 [INFO][4568] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.276 [INFO][4568] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.131/26] block=192.168.113.128/26 handle="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.276 [INFO][4568] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.131/26] handle="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.276 [INFO][4568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:02.314834 containerd[1729]: 2026-03-02 13:32:02.276 [INFO][4568] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.131/26] IPv6=[] ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" HandleID="k8s-pod-network.5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:02.315400 containerd[1729]: 2026-03-02 13:32:02.281 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"07045e73-ca01-465e-919f-8f478e008766", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"goldmane-9566f57b5-nnh7j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic443914566a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.315400 containerd[1729]: 2026-03-02 13:32:02.281 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.131/32] ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:02.315400 containerd[1729]: 2026-03-02 13:32:02.281 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic443914566a ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:02.315400 containerd[1729]: 2026-03-02 13:32:02.287 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:02.315400 containerd[1729]: 2026-03-02 13:32:02.289 [INFO][4546] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"07045e73-ca01-465e-919f-8f478e008766", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2", Pod:"goldmane-9566f57b5-nnh7j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic443914566a", MAC:"de:12:bf:c3:71:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.315400 containerd[1729]: 2026-03-02 13:32:02.311 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2" 
Namespace="calico-system" Pod="goldmane-9566f57b5-nnh7j" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:02.366891 containerd[1729]: time="2026-03-02T13:32:02.366785572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:02.367886 containerd[1729]: time="2026-03-02T13:32:02.367421293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:02.367886 containerd[1729]: time="2026-03-02T13:32:02.367452213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.367886 containerd[1729]: time="2026-03-02T13:32:02.367732253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.380785 systemd[1]: Started cri-containerd-f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb.scope - libcontainer container f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb. Mar 2 13:32:02.396097 systemd[1]: run-netns-cni\x2d893ad51b\x2d4e13\x2d23f3\x2d7517\x2d352884487946.mount: Deactivated successfully. Mar 2 13:32:02.396189 systemd[1]: run-netns-cni\x2dc747874d\x2d5a47\x2d8927\x2df3e0\x2d4c31122380d4.mount: Deactivated successfully. Mar 2 13:32:02.396241 systemd[1]: run-netns-cni\x2d337a9144\x2d1d69\x2df764\x2da702\x2d924fd7a2479b.mount: Deactivated successfully. Mar 2 13:32:02.396294 systemd[1]: var-lib-kubelet-pods-03e66469\x2deeaa\x2d4abc\x2daa42\x2d114144b38f24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gt9b.mount: Deactivated successfully. Mar 2 13:32:02.396345 systemd[1]: var-lib-kubelet-pods-03e66469\x2deeaa\x2d4abc\x2daa42\x2d114144b38f24-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 2 13:32:02.401109 systemd-networkd[1352]: calic5efec2ed78: Link UP Mar 2 13:32:02.405692 systemd-networkd[1352]: calic5efec2ed78: Gained carrier Mar 2 13:32:02.415198 systemd[1]: Started cri-containerd-5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2.scope - libcontainer container 5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2. Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.018 [ERROR][4534] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.046 [INFO][4534] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0 calico-apiserver-6cd4748474- calico-system 0ce87770-1399-4d16-9b2e-f92d3470b97a 905 0 2026-03-02 13:31:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd4748474 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e calico-apiserver-6cd4748474-c2zkg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic5efec2ed78 [] [] }} ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.046 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" 
WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.137 [INFO][4576] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" HandleID="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.152 [INFO][4576] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" HandleID="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-160832fd4e", "pod":"calico-apiserver-6cd4748474-c2zkg", "timestamp":"2026-03-02 13:32:02.137087141 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000302000)} Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.152 [INFO][4576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.276 [INFO][4576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.276 [INFO][4576] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.294 [INFO][4576] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.319 [INFO][4576] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.328 [INFO][4576] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.331 [INFO][4576] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.336 [INFO][4576] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.336 [INFO][4576] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.342 [INFO][4576] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6 Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.361 [INFO][4576] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.372 [INFO][4576] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.132/26] block=192.168.113.128/26 handle="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.372 [INFO][4576] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.132/26] handle="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.372 [INFO][4576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:02.448502 containerd[1729]: 2026-03-02 13:32:02.372 [INFO][4576] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.132/26] IPv6=[] ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" HandleID="k8s-pod-network.67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.449155 containerd[1729]: 2026-03-02 13:32:02.377 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"0ce87770-1399-4d16-9b2e-f92d3470b97a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"calico-apiserver-6cd4748474-c2zkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic5efec2ed78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.449155 containerd[1729]: 2026-03-02 13:32:02.378 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.132/32] ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.449155 containerd[1729]: 2026-03-02 13:32:02.378 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5efec2ed78 ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.449155 containerd[1729]: 2026-03-02 13:32:02.406 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" 
WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.449155 containerd[1729]: 2026-03-02 13:32:02.407 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"0ce87770-1399-4d16-9b2e-f92d3470b97a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6", Pod:"calico-apiserver-6cd4748474-c2zkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic5efec2ed78", MAC:"3a:59:66:11:49:eb", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:02.449155 containerd[1729]: 2026-03-02 13:32:02.433 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-c2zkg" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:02.466379 containerd[1729]: time="2026-03-02T13:32:02.466340649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gfmzk,Uid:cbb205fa-9365-49ad-a5f5-156be4f3e7b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262\"" Mar 2 13:32:02.486708 containerd[1729]: time="2026-03-02T13:32:02.486665433Z" level=info msg="CreateContainer within sandbox \"c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:32:02.508128 containerd[1729]: time="2026-03-02T13:32:02.508088018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jm7xw,Uid:8a206f66-6255-4ca6-b52b-ffcb569d488d,Namespace:calico-system,Attempt:1,} returns sandbox id \"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb\"" Mar 2 13:32:02.530655 containerd[1729]: time="2026-03-02T13:32:02.518298591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:02.530655 containerd[1729]: time="2026-03-02T13:32:02.518350951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:02.530655 containerd[1729]: time="2026-03-02T13:32:02.518362471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.530655 containerd[1729]: time="2026-03-02T13:32:02.518438631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:02.538528 containerd[1729]: time="2026-03-02T13:32:02.537682613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\"" Mar 2 13:32:02.558523 systemd[1]: run-containerd-runc-k8s.io-67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6-runc.4PvkDX.mount: Deactivated successfully. Mar 2 13:32:02.570646 systemd[1]: Started cri-containerd-67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6.scope - libcontainer container 67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6. Mar 2 13:32:02.578874 systemd[1]: Removed slice kubepods-besteffort-pod03e66469_eeaa_4abc_aa42_114144b38f24.slice - libcontainer container kubepods-besteffort-pod03e66469_eeaa_4abc_aa42_114144b38f24.slice. 
Mar 2 13:32:02.581249 containerd[1729]: time="2026-03-02T13:32:02.581087985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9566f57b5-nnh7j,Uid:07045e73-ca01-465e-919f-8f478e008766,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2\"" Mar 2 13:32:02.582405 containerd[1729]: time="2026-03-02T13:32:02.582153146Z" level=info msg="CreateContainer within sandbox \"c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c759aad0eef471001ab73f520ddbfb33f5d9e9e38a628627366ec9e082d6ee9a\"" Mar 2 13:32:02.583977 containerd[1729]: time="2026-03-02T13:32:02.583864708Z" level=info msg="StartContainer for \"c759aad0eef471001ab73f520ddbfb33f5d9e9e38a628627366ec9e082d6ee9a\"" Mar 2 13:32:02.657569 systemd[1]: Started cri-containerd-c759aad0eef471001ab73f520ddbfb33f5d9e9e38a628627366ec9e082d6ee9a.scope - libcontainer container c759aad0eef471001ab73f520ddbfb33f5d9e9e38a628627366ec9e082d6ee9a. Mar 2 13:32:02.689261 systemd[1]: Created slice kubepods-besteffort-pod314d053c_c824_4760_bdd5_b145d15320bc.slice - libcontainer container kubepods-besteffort-pod314d053c_c824_4760_bdd5_b145d15320bc.slice. 
Mar 2 13:32:02.704131 containerd[1729]: time="2026-03-02T13:32:02.703751569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-c2zkg,Uid:0ce87770-1399-4d16-9b2e-f92d3470b97a,Namespace:calico-system,Attempt:1,} returns sandbox id \"67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6\"" Mar 2 13:32:02.738275 containerd[1729]: time="2026-03-02T13:32:02.738133410Z" level=info msg="StartContainer for \"c759aad0eef471001ab73f520ddbfb33f5d9e9e38a628627366ec9e082d6ee9a\" returns successfully" Mar 2 13:32:02.777137 kubelet[3208]: I0302 13:32:02.776882 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nsn\" (UniqueName: \"kubernetes.io/projected/314d053c-c824-4760-bdd5-b145d15320bc-kube-api-access-95nsn\") pod \"whisker-755469947b-sb77n\" (UID: \"314d053c-c824-4760-bdd5-b145d15320bc\") " pod="calico-system/whisker-755469947b-sb77n" Mar 2 13:32:02.777137 kubelet[3208]: I0302 13:32:02.776934 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/314d053c-c824-4760-bdd5-b145d15320bc-nginx-config\") pod \"whisker-755469947b-sb77n\" (UID: \"314d053c-c824-4760-bdd5-b145d15320bc\") " pod="calico-system/whisker-755469947b-sb77n" Mar 2 13:32:02.777137 kubelet[3208]: I0302 13:32:02.776952 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/314d053c-c824-4760-bdd5-b145d15320bc-whisker-ca-bundle\") pod \"whisker-755469947b-sb77n\" (UID: \"314d053c-c824-4760-bdd5-b145d15320bc\") " pod="calico-system/whisker-755469947b-sb77n" Mar 2 13:32:02.777137 kubelet[3208]: I0302 13:32:02.776972 3208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/314d053c-c824-4760-bdd5-b145d15320bc-whisker-backend-key-pair\") pod \"whisker-755469947b-sb77n\" (UID: \"314d053c-c824-4760-bdd5-b145d15320bc\") " pod="calico-system/whisker-755469947b-sb77n" Mar 2 13:32:02.995849 containerd[1729]: time="2026-03-02T13:32:02.995358793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755469947b-sb77n,Uid:314d053c-c824-4760-bdd5-b145d15320bc,Namespace:calico-system,Attempt:0,}" Mar 2 13:32:03.025639 kernel: calico-node[4860]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 2 13:32:03.233422 systemd-networkd[1352]: calic616c01e326: Link UP Mar 2 13:32:03.233630 systemd-networkd[1352]: calic616c01e326: Gained carrier Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.136 [INFO][4957] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0 whisker-755469947b- calico-system 314d053c-c824-4760-bdd5-b145d15320bc 939 0 2026-03-02 13:32:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:755469947b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e whisker-755469947b-sb77n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic616c01e326 [] [] }} ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.136 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 
2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.165 [INFO][4969] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" HandleID="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.175 [INFO][4969] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" HandleID="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-160832fd4e", "pod":"whisker-755469947b-sb77n", "timestamp":"2026-03-02 13:32:03.165147914 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400038ef20)} Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.175 [INFO][4969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.176 [INFO][4969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.176 [INFO][4969] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.178 [INFO][4969] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.182 [INFO][4969] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.196 [INFO][4969] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.201 [INFO][4969] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.204 [INFO][4969] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.204 [INFO][4969] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.206 [INFO][4969] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.216 [INFO][4969] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.228 [INFO][4969] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.133/26] block=192.168.113.128/26 handle="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.228 [INFO][4969] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.133/26] handle="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.228 [INFO][4969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:03.257620 containerd[1729]: 2026-03-02 13:32:03.228 [INFO][4969] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.133/26] IPv6=[] ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" HandleID="k8s-pod-network.bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 2 13:32:03.259127 containerd[1729]: 2026-03-02 13:32:03.231 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0", GenerateName:"whisker-755469947b-", Namespace:"calico-system", SelfLink:"", UID:"314d053c-c824-4760-bdd5-b145d15320bc", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 32, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"755469947b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"whisker-755469947b-sb77n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic616c01e326", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:03.259127 containerd[1729]: 2026-03-02 13:32:03.231 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.133/32] ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 2 13:32:03.259127 containerd[1729]: 2026-03-02 13:32:03.231 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic616c01e326 ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 2 13:32:03.259127 containerd[1729]: 2026-03-02 13:32:03.233 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 2 13:32:03.259127 containerd[1729]: 2026-03-02 13:32:03.234 [INFO][4957] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0", GenerateName:"whisker-755469947b-", Namespace:"calico-system", SelfLink:"", UID:"314d053c-c824-4760-bdd5-b145d15320bc", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 32, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"755469947b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a", Pod:"whisker-755469947b-sb77n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic616c01e326", MAC:"82:24:3b:9b:54:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:03.259127 containerd[1729]: 2026-03-02 13:32:03.253 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a" 
Namespace="calico-system" Pod="whisker-755469947b-sb77n" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--755469947b--sb77n-eth0" Mar 2 13:32:03.262884 kubelet[3208]: I0302 13:32:03.261976 3208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e66469-eeaa-4abc-aa42-114144b38f24" path="/var/lib/kubelet/pods/03e66469-eeaa-4abc-aa42-114144b38f24/volumes" Mar 2 13:32:03.291031 containerd[1729]: time="2026-03-02T13:32:03.290918262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:03.291031 containerd[1729]: time="2026-03-02T13:32:03.290984222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:03.291031 containerd[1729]: time="2026-03-02T13:32:03.291005182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:03.291332 containerd[1729]: time="2026-03-02T13:32:03.291096742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:03.293687 systemd-networkd[1352]: calia708df760fa: Gained IPv6LL Mar 2 13:32:03.314641 systemd[1]: Started cri-containerd-bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a.scope - libcontainer container bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a. 
Mar 2 13:32:03.348025 containerd[1729]: time="2026-03-02T13:32:03.347956649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755469947b-sb77n,Uid:314d053c-c824-4760-bdd5-b145d15320bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a\"" Mar 2 13:32:03.357621 systemd-networkd[1352]: calif8e9bdb62cd: Gained IPv6LL Mar 2 13:32:03.586348 systemd-networkd[1352]: vxlan.calico: Link UP Mar 2 13:32:03.586356 systemd-networkd[1352]: vxlan.calico: Gained carrier Mar 2 13:32:03.618730 kubelet[3208]: I0302 13:32:03.618081 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gfmzk" podStartSLOduration=42.618063248 podStartE2EDuration="42.618063248s" podCreationTimestamp="2026-03-02 13:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:32:03.597408584 +0000 UTC m=+48.437839067" watchObservedRunningTime="2026-03-02 13:32:03.618063248 +0000 UTC m=+48.458493691" Mar 2 13:32:04.062631 systemd-networkd[1352]: calic5efec2ed78: Gained IPv6LL Mar 2 13:32:04.189620 systemd-networkd[1352]: calic443914566a: Gained IPv6LL Mar 2 13:32:04.318673 systemd-networkd[1352]: calic616c01e326: Gained IPv6LL Mar 2 13:32:04.341353 containerd[1729]: time="2026-03-02T13:32:04.341293268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:04.344118 containerd[1729]: time="2026-03-02T13:32:04.343920791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.3: active requests=0, bytes read=8255947" Mar 2 13:32:04.347783 containerd[1729]: time="2026-03-02T13:32:04.347733475Z" level=info msg="ImageCreate event name:\"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Mar 2 13:32:04.354864 containerd[1729]: time="2026-03-02T13:32:04.354643603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:04.356093 containerd[1729]: time="2026-03-02T13:32:04.355509564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.3\" with image id \"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:3d04cd6265f850f0420b413351275ebfd244991b1b9e69c64efe8b4eff45b53f\", size \"9653472\" in 1.817781031s" Mar 2 13:32:04.356093 containerd[1729]: time="2026-03-02T13:32:04.355546884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.3\" returns image reference \"sha256:a7b37b6d011a8219915c610022e2c5ef47396285db6e7e10d7694ff3dea87dc5\"" Mar 2 13:32:04.359133 containerd[1729]: time="2026-03-02T13:32:04.358703288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\"" Mar 2 13:32:04.364931 containerd[1729]: time="2026-03-02T13:32:04.364895894Z" level=info msg="CreateContainer within sandbox \"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 2 13:32:04.397918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518896565.mount: Deactivated successfully. 
Mar 2 13:32:04.411549 containerd[1729]: time="2026-03-02T13:32:04.411505226Z" level=info msg="CreateContainer within sandbox \"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d43d32958ba81702df8be75c705fab4e2bca051cd7e47e5367a2c59a0e261b2a\"" Mar 2 13:32:04.413402 containerd[1729]: time="2026-03-02T13:32:04.412061187Z" level=info msg="StartContainer for \"d43d32958ba81702df8be75c705fab4e2bca051cd7e47e5367a2c59a0e261b2a\"" Mar 2 13:32:04.465744 systemd[1]: Started cri-containerd-d43d32958ba81702df8be75c705fab4e2bca051cd7e47e5367a2c59a0e261b2a.scope - libcontainer container d43d32958ba81702df8be75c705fab4e2bca051cd7e47e5367a2c59a0e261b2a. Mar 2 13:32:04.505313 containerd[1729]: time="2026-03-02T13:32:04.505019690Z" level=info msg="StartContainer for \"d43d32958ba81702df8be75c705fab4e2bca051cd7e47e5367a2c59a0e261b2a\" returns successfully" Mar 2 13:32:05.405620 systemd-networkd[1352]: vxlan.calico: Gained IPv6LL Mar 2 13:32:06.533238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021599213.mount: Deactivated successfully. 
Mar 2 13:32:06.880529 containerd[1729]: time="2026-03-02T13:32:06.879773479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:06.886065 containerd[1729]: time="2026-03-02T13:32:06.885803686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.3: active requests=0, bytes read=51600693" Mar 2 13:32:06.889038 containerd[1729]: time="2026-03-02T13:32:06.888792449Z" level=info msg="ImageCreate event name:\"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:06.893815 containerd[1729]: time="2026-03-02T13:32:06.893732335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:06.894766 containerd[1729]: time="2026-03-02T13:32:06.894638376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" with image id \"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:e85ffa1d9468908b0bd44664de0d023da6669faefb3e1013b3a15b63dfa1f9a9\", size \"51600539\" in 2.535891408s" Mar 2 13:32:06.894766 containerd[1729]: time="2026-03-02T13:32:06.894678016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.3\" returns image reference \"sha256:d40b2a23702c4c62ef242fb10a0dae8b80d5b5a0fd36ecec29e43b227f22611d\"" Mar 2 13:32:06.896243 containerd[1729]: time="2026-03-02T13:32:06.896202457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\"" Mar 2 13:32:06.903803 containerd[1729]: time="2026-03-02T13:32:06.903632106Z" level=info msg="CreateContainer within sandbox \"5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 2 13:32:06.939234 containerd[1729]: time="2026-03-02T13:32:06.939106025Z" level=info msg="CreateContainer within sandbox \"5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"190f252380241aba7cf9c07a6b96b7193eb74bffc8d3fe42f20bf9f0a70b7801\"" Mar 2 13:32:06.941052 containerd[1729]: time="2026-03-02T13:32:06.940428546Z" level=info msg="StartContainer for \"190f252380241aba7cf9c07a6b96b7193eb74bffc8d3fe42f20bf9f0a70b7801\"" Mar 2 13:32:06.970652 systemd[1]: Started cri-containerd-190f252380241aba7cf9c07a6b96b7193eb74bffc8d3fe42f20bf9f0a70b7801.scope - libcontainer container 190f252380241aba7cf9c07a6b96b7193eb74bffc8d3fe42f20bf9f0a70b7801. Mar 2 13:32:07.008964 containerd[1729]: time="2026-03-02T13:32:07.008913382Z" level=info msg="StartContainer for \"190f252380241aba7cf9c07a6b96b7193eb74bffc8d3fe42f20bf9f0a70b7801\" returns successfully" Mar 2 13:32:07.617115 kubelet[3208]: I0302 13:32:07.617041 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-9566f57b5-nnh7j" podStartSLOduration=29.310253993 podStartE2EDuration="33.617024136s" podCreationTimestamp="2026-03-02 13:31:34 +0000 UTC" firstStartedPulling="2026-03-02 13:32:02.589328434 +0000 UTC m=+47.429758877" lastFinishedPulling="2026-03-02 13:32:06.896098577 +0000 UTC m=+51.736529020" observedRunningTime="2026-03-02 13:32:07.612957691 +0000 UTC m=+52.453388174" watchObservedRunningTime="2026-03-02 13:32:07.617024136 +0000 UTC m=+52.457454579" Mar 2 13:32:10.423505 containerd[1729]: time="2026-03-02T13:32:10.423345283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:10.426537 containerd[1729]: time="2026-03-02T13:32:10.426380726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.3: active 
requests=0, bytes read=45512258" Mar 2 13:32:10.431640 containerd[1729]: time="2026-03-02T13:32:10.430453571Z" level=info msg="ImageCreate event name:\"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:10.434964 containerd[1729]: time="2026-03-02T13:32:10.434911136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:10.435774 containerd[1729]: time="2026-03-02T13:32:10.435733617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" with image id \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:c2def03be7412561bd678df17fcf2467cac990dbb42278dcfe193aa5a43128d4\", size \"46909799\" in 3.5394894s" Mar 2 13:32:10.435774 containerd[1729]: time="2026-03-02T13:32:10.435772977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.3\" returns image reference \"sha256:6c1d6f109ccbdc040de9bade4e1d6f18ad2b7e93a2479f2ff827985a6b5c9653\"" Mar 2 13:32:10.437720 containerd[1729]: time="2026-03-02T13:32:10.437691259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\"" Mar 2 13:32:10.445679 containerd[1729]: time="2026-03-02T13:32:10.445335147Z" level=info msg="CreateContainer within sandbox \"67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:32:10.483133 containerd[1729]: time="2026-03-02T13:32:10.482949949Z" level=info msg="CreateContainer within sandbox \"67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"992572acce8ae15f68a66f86fec4941ed31efa7aae86a72c7eed3963092a3541\"" Mar 2 13:32:10.485106 containerd[1729]: time="2026-03-02T13:32:10.483962710Z" level=info msg="StartContainer for \"992572acce8ae15f68a66f86fec4941ed31efa7aae86a72c7eed3963092a3541\"" Mar 2 13:32:10.521666 systemd[1]: Started cri-containerd-992572acce8ae15f68a66f86fec4941ed31efa7aae86a72c7eed3963092a3541.scope - libcontainer container 992572acce8ae15f68a66f86fec4941ed31efa7aae86a72c7eed3963092a3541. Mar 2 13:32:10.569513 containerd[1729]: time="2026-03-02T13:32:10.569442405Z" level=info msg="StartContainer for \"992572acce8ae15f68a66f86fec4941ed31efa7aae86a72c7eed3963092a3541\" returns successfully" Mar 2 13:32:11.251771 containerd[1729]: time="2026-03-02T13:32:11.251369840Z" level=info msg="StopPodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\"" Mar 2 13:32:11.344870 kubelet[3208]: I0302 13:32:11.343373 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cd4748474-c2zkg" podStartSLOduration=29.612794816 podStartE2EDuration="37.343354782s" podCreationTimestamp="2026-03-02 13:31:34 +0000 UTC" firstStartedPulling="2026-03-02 13:32:02.706248612 +0000 UTC m=+47.546679095" lastFinishedPulling="2026-03-02 13:32:10.436808618 +0000 UTC m=+55.277239061" observedRunningTime="2026-03-02 13:32:10.644720208 +0000 UTC m=+55.485150691" watchObservedRunningTime="2026-03-02 13:32:11.343354782 +0000 UTC m=+56.183785265" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.341 [INFO][5347] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.341 [INFO][5347] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" iface="eth0" netns="/var/run/netns/cni-f92f5f94-b692-ee61-2bbd-c4cc98d9fb65" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.341 [INFO][5347] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" iface="eth0" netns="/var/run/netns/cni-f92f5f94-b692-ee61-2bbd-c4cc98d9fb65" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.342 [INFO][5347] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" iface="eth0" netns="/var/run/netns/cni-f92f5f94-b692-ee61-2bbd-c4cc98d9fb65" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.342 [INFO][5347] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.344 [INFO][5347] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.370 [INFO][5354] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.370 [INFO][5354] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.371 [INFO][5354] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.382 [WARNING][5354] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.382 [INFO][5354] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.384 [INFO][5354] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:11.388978 containerd[1729]: 2026-03-02 13:32:11.386 [INFO][5347] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:11.392162 containerd[1729]: time="2026-03-02T13:32:11.389210832Z" level=info msg="TearDown network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" successfully" Mar 2 13:32:11.392162 containerd[1729]: time="2026-03-02T13:32:11.389240673Z" level=info msg="StopPodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" returns successfully" Mar 2 13:32:11.392162 containerd[1729]: time="2026-03-02T13:32:11.390659514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5dqvc,Uid:7638e352-eaab-46e6-a147-f204ad1cab74,Namespace:kube-system,Attempt:1,}" Mar 2 13:32:11.396027 systemd[1]: run-netns-cni\x2df92f5f94\x2db692\x2dee61\x2d2bbd\x2dc4cc98d9fb65.mount: Deactivated successfully. 
Mar 2 13:32:11.620921 kubelet[3208]: I0302 13:32:11.620492 3208 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:32:11.991778 systemd-networkd[1352]: calid5bf1220b91: Link UP Mar 2 13:32:11.993066 systemd-networkd[1352]: calid5bf1220b91: Gained carrier Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.884 [INFO][5360] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0 coredns-674b8bbfcf- kube-system 7638e352-eaab-46e6-a147-f204ad1cab74 994 0 2026-03-02 13:31:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e coredns-674b8bbfcf-5dqvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid5bf1220b91 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.884 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.917 [INFO][5372] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" HandleID="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.023094 
containerd[1729]: 2026-03-02 13:32:11.931 [INFO][5372] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" HandleID="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036d520), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-160832fd4e", "pod":"coredns-674b8bbfcf-5dqvc", "timestamp":"2026-03-02 13:32:11.917365596 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e8f20)} Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.931 [INFO][5372] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.931 [INFO][5372] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.931 [INFO][5372] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.940 [INFO][5372] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.945 [INFO][5372] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.950 [INFO][5372] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.954 [INFO][5372] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.957 [INFO][5372] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.957 [INFO][5372] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.958 [INFO][5372] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854 Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.968 [INFO][5372] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.983 [INFO][5372] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.134/26] block=192.168.113.128/26 handle="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.983 [INFO][5372] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.134/26] handle="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.983 [INFO][5372] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:12.023094 containerd[1729]: 2026-03-02 13:32:11.983 [INFO][5372] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.134/26] IPv6=[] ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" HandleID="k8s-pod-network.be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.025445 containerd[1729]: 2026-03-02 13:32:11.986 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7638e352-eaab-46e6-a147-f204ad1cab74", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"coredns-674b8bbfcf-5dqvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5bf1220b91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:12.025445 containerd[1729]: 2026-03-02 13:32:11.986 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.134/32] ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.025445 containerd[1729]: 2026-03-02 13:32:11.986 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5bf1220b91 ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.025445 containerd[1729]: 2026-03-02 13:32:11.992 [INFO][5360] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.025445 containerd[1729]: 2026-03-02 13:32:11.995 [INFO][5360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7638e352-eaab-46e6-a147-f204ad1cab74", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854", Pod:"coredns-674b8bbfcf-5dqvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5bf1220b91", MAC:"1e:e3:4d:9f:57:df", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:12.025445 containerd[1729]: 2026-03-02 13:32:12.019 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854" Namespace="kube-system" Pod="coredns-674b8bbfcf-5dqvc" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:12.055251 containerd[1729]: time="2026-03-02T13:32:12.055017752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:12.055251 containerd[1729]: time="2026-03-02T13:32:12.055067632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:12.055251 containerd[1729]: time="2026-03-02T13:32:12.055078232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:12.055251 containerd[1729]: time="2026-03-02T13:32:12.055145432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:12.088679 systemd[1]: Started cri-containerd-be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854.scope - libcontainer container be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854. 
Mar 2 13:32:12.132771 containerd[1729]: time="2026-03-02T13:32:12.132726400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5dqvc,Uid:7638e352-eaab-46e6-a147-f204ad1cab74,Namespace:kube-system,Attempt:1,} returns sandbox id \"be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854\"" Mar 2 13:32:12.143432 containerd[1729]: time="2026-03-02T13:32:12.143342012Z" level=info msg="CreateContainer within sandbox \"be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:32:12.196522 containerd[1729]: time="2026-03-02T13:32:12.196453673Z" level=info msg="CreateContainer within sandbox \"be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a90a766b9546a63362de4941f5fd0578c84e03c3d0a6d1f9a66d48c878b6c29\"" Mar 2 13:32:12.197531 containerd[1729]: time="2026-03-02T13:32:12.197460714Z" level=info msg="StartContainer for \"8a90a766b9546a63362de4941f5fd0578c84e03c3d0a6d1f9a66d48c878b6c29\"" Mar 2 13:32:12.227658 systemd[1]: Started cri-containerd-8a90a766b9546a63362de4941f5fd0578c84e03c3d0a6d1f9a66d48c878b6c29.scope - libcontainer container 8a90a766b9546a63362de4941f5fd0578c84e03c3d0a6d1f9a66d48c878b6c29. 
Mar 2 13:32:12.261209 containerd[1729]: time="2026-03-02T13:32:12.260983986Z" level=info msg="StartContainer for \"8a90a766b9546a63362de4941f5fd0578c84e03c3d0a6d1f9a66d48c878b6c29\" returns successfully" Mar 2 13:32:12.641731 kubelet[3208]: I0302 13:32:12.641621 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5dqvc" podStartSLOduration=51.641600938 podStartE2EDuration="51.641600938s" podCreationTimestamp="2026-03-02 13:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:32:12.639924816 +0000 UTC m=+57.480355259" watchObservedRunningTime="2026-03-02 13:32:12.641600938 +0000 UTC m=+57.482031421" Mar 2 13:32:12.994633 containerd[1729]: time="2026-03-02T13:32:12.993726138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:12.996824 containerd[1729]: time="2026-03-02T13:32:12.996782822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.3: active requests=0, bytes read=5881068" Mar 2 13:32:13.000249 containerd[1729]: time="2026-03-02T13:32:13.000217265Z" level=info msg="ImageCreate event name:\"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:13.004789 containerd[1729]: time="2026-03-02T13:32:13.004732391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:13.006248 containerd[1729]: time="2026-03-02T13:32:13.005719792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.3\" with image id \"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:3a388b567fff5cc31c64399d4af0fd03d2f4d243ef26e6f6b77a49386dbadeca\", size \"7278585\" in 2.567883853s" Mar 2 13:32:13.006248 containerd[1729]: time="2026-03-02T13:32:13.005754072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.3\" returns image reference \"sha256:860a7f2cdb9123795f95a07e0cc91bc6b511927d1a4d1d588c303c9c59e0fa59\"" Mar 2 13:32:13.007045 containerd[1729]: time="2026-03-02T13:32:13.007017953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\"" Mar 2 13:32:13.014015 containerd[1729]: time="2026-03-02T13:32:13.013975441Z" level=info msg="CreateContainer within sandbox \"bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 2 13:32:13.056411 containerd[1729]: time="2026-03-02T13:32:13.056121689Z" level=info msg="CreateContainer within sandbox \"bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6fcb5a1ecf90056f5143cd2001b6f5116594e92e6575c84b2cf1c22c2cfe4492\"" Mar 2 13:32:13.057970 containerd[1729]: time="2026-03-02T13:32:13.057843571Z" level=info msg="StartContainer for \"6fcb5a1ecf90056f5143cd2001b6f5116594e92e6575c84b2cf1c22c2cfe4492\"" Mar 2 13:32:13.098648 systemd[1]: Started cri-containerd-6fcb5a1ecf90056f5143cd2001b6f5116594e92e6575c84b2cf1c22c2cfe4492.scope - libcontainer container 6fcb5a1ecf90056f5143cd2001b6f5116594e92e6575c84b2cf1c22c2cfe4492. 
Mar 2 13:32:13.132873 containerd[1729]: time="2026-03-02T13:32:13.132830856Z" level=info msg="StartContainer for \"6fcb5a1ecf90056f5143cd2001b6f5116594e92e6575c84b2cf1c22c2cfe4492\" returns successfully" Mar 2 13:32:13.251112 containerd[1729]: time="2026-03-02T13:32:13.250350910Z" level=info msg="StopPodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\"" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.307 [INFO][5534] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.307 [INFO][5534] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" iface="eth0" netns="/var/run/netns/cni-b8a597fb-4201-6695-9cc9-093587666831" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.307 [INFO][5534] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" iface="eth0" netns="/var/run/netns/cni-b8a597fb-4201-6695-9cc9-093587666831" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.308 [INFO][5534] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" iface="eth0" netns="/var/run/netns/cni-b8a597fb-4201-6695-9cc9-093587666831" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.308 [INFO][5534] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.308 [INFO][5534] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.329 [INFO][5541] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.330 [INFO][5541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.330 [INFO][5541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.340 [WARNING][5541] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.340 [INFO][5541] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.342 [INFO][5541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:13.346733 containerd[1729]: 2026-03-02 13:32:13.344 [INFO][5534] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:13.347576 containerd[1729]: time="2026-03-02T13:32:13.347233500Z" level=info msg="TearDown network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" successfully" Mar 2 13:32:13.347576 containerd[1729]: time="2026-03-02T13:32:13.347265460Z" level=info msg="StopPodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" returns successfully" Mar 2 13:32:13.348221 containerd[1729]: time="2026-03-02T13:32:13.348192941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dbc7c697-wkcts,Uid:72b2de10-35e9-4af6-93f9-e337a5d88bbb,Namespace:calico-system,Attempt:1,}" Mar 2 13:32:13.397762 systemd[1]: run-netns-cni\x2db8a597fb\x2d4201\x2d6695\x2d9cc9\x2d093587666831.mount: Deactivated successfully. 
Mar 2 13:32:13.406183 systemd-networkd[1352]: calid5bf1220b91: Gained IPv6LL Mar 2 13:32:13.507561 systemd-networkd[1352]: cali5ee50ff6fdb: Link UP Mar 2 13:32:13.508412 systemd-networkd[1352]: cali5ee50ff6fdb: Gained carrier Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.425 [INFO][5549] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0 calico-kube-controllers-75dbc7c697- calico-system 72b2de10-35e9-4af6-93f9-e337a5d88bbb 1017 0 2026-03-02 13:31:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75dbc7c697 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e calico-kube-controllers-75dbc7c697-wkcts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5ee50ff6fdb [] [] }} ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.425 [INFO][5549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.453 [INFO][5564] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" 
HandleID="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.462 [INFO][5564] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" HandleID="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-160832fd4e", "pod":"calico-kube-controllers-75dbc7c697-wkcts", "timestamp":"2026-03-02 13:32:13.4533837 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a6dc0)} Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.463 [INFO][5564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.463 [INFO][5564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.463 [INFO][5564] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.466 [INFO][5564] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.470 [INFO][5564] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.475 [INFO][5564] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.477 [INFO][5564] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.480 [INFO][5564] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.480 [INFO][5564] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.481 [INFO][5564] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682 Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.490 [INFO][5564] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.502 [INFO][5564] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.135/26] block=192.168.113.128/26 handle="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.502 [INFO][5564] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.135/26] handle="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.502 [INFO][5564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:13.526776 containerd[1729]: 2026-03-02 13:32:13.502 [INFO][5564] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.135/26] IPv6=[] ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" HandleID="k8s-pod-network.40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.527334 containerd[1729]: 2026-03-02 13:32:13.505 [INFO][5549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0", GenerateName:"calico-kube-controllers-75dbc7c697-", Namespace:"calico-system", SelfLink:"", UID:"72b2de10-35e9-4af6-93f9-e337a5d88bbb", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dbc7c697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"calico-kube-controllers-75dbc7c697-wkcts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ee50ff6fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:13.527334 containerd[1729]: 2026-03-02 13:32:13.505 [INFO][5549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.135/32] ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.527334 containerd[1729]: 2026-03-02 13:32:13.505 [INFO][5549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ee50ff6fdb ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.527334 containerd[1729]: 2026-03-02 13:32:13.508 [INFO][5549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.527334 containerd[1729]: 2026-03-02 13:32:13.509 [INFO][5549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0", GenerateName:"calico-kube-controllers-75dbc7c697-", Namespace:"calico-system", SelfLink:"", UID:"72b2de10-35e9-4af6-93f9-e337a5d88bbb", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dbc7c697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682", Pod:"calico-kube-controllers-75dbc7c697-wkcts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ee50ff6fdb", MAC:"22:8a:23:74:39:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:13.527334 containerd[1729]: 2026-03-02 13:32:13.522 [INFO][5549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682" Namespace="calico-system" Pod="calico-kube-controllers-75dbc7c697-wkcts" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:13.556995 containerd[1729]: time="2026-03-02T13:32:13.556863258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:13.557486 containerd[1729]: time="2026-03-02T13:32:13.557364738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:13.557486 containerd[1729]: time="2026-03-02T13:32:13.557396618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:13.557756 containerd[1729]: time="2026-03-02T13:32:13.557621618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:13.589662 systemd[1]: Started cri-containerd-40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682.scope - libcontainer container 40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682. 
Mar 2 13:32:13.629371 containerd[1729]: time="2026-03-02T13:32:13.629304740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75dbc7c697-wkcts,Uid:72b2de10-35e9-4af6-93f9-e337a5d88bbb,Namespace:calico-system,Attempt:1,} returns sandbox id \"40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682\"" Mar 2 13:32:14.249652 containerd[1729]: time="2026-03-02T13:32:14.249324844Z" level=info msg="StopPodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\"" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.305 [INFO][5640] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.306 [INFO][5640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" iface="eth0" netns="/var/run/netns/cni-aa870580-723e-b063-013d-75dd4760fc85" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.306 [INFO][5640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" iface="eth0" netns="/var/run/netns/cni-aa870580-723e-b063-013d-75dd4760fc85" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.306 [INFO][5640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" iface="eth0" netns="/var/run/netns/cni-aa870580-723e-b063-013d-75dd4760fc85" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.306 [INFO][5640] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.306 [INFO][5640] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.330 [INFO][5655] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.330 [INFO][5655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.330 [INFO][5655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.342 [WARNING][5655] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.342 [INFO][5655] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.343 [INFO][5655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:14.348496 containerd[1729]: 2026-03-02 13:32:14.346 [INFO][5640] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:14.349538 containerd[1729]: time="2026-03-02T13:32:14.349453918Z" level=info msg="TearDown network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" successfully" Mar 2 13:32:14.349538 containerd[1729]: time="2026-03-02T13:32:14.349521638Z" level=info msg="StopPodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" returns successfully" Mar 2 13:32:14.354799 systemd[1]: run-netns-cni\x2daa870580\x2d723e\x2db063\x2d013d\x2d75dd4760fc85.mount: Deactivated successfully. 
Mar 2 13:32:14.360462 containerd[1729]: time="2026-03-02T13:32:14.360420650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-xxvfj,Uid:08101d6f-0f5b-4f23-bfc4-ff79bac2631a,Namespace:calico-system,Attempt:1,}" Mar 2 13:32:14.525843 systemd-networkd[1352]: cali77b93d53597: Link UP Mar 2 13:32:14.527394 systemd-networkd[1352]: cali77b93d53597: Gained carrier Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.441 [INFO][5667] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0 calico-apiserver-6cd4748474- calico-system 08101d6f-0f5b-4f23-bfc4-ff79bac2631a 1026 0 2026-03-02 13:31:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd4748474 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-160832fd4e calico-apiserver-6cd4748474-xxvfj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali77b93d53597 [] [] }} ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.441 [INFO][5667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.467 [INFO][5681] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" HandleID="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.480 [INFO][5681] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" HandleID="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273240), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-160832fd4e", "pod":"calico-apiserver-6cd4748474-xxvfj", "timestamp":"2026-03-02 13:32:14.467161131 +0000 UTC"}, Hostname:"ci-4081.3.101-160832fd4e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a9080)} Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.480 [INFO][5681] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.480 [INFO][5681] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.480 [INFO][5681] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-160832fd4e' Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.483 [INFO][5681] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.488 [INFO][5681] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.496 [INFO][5681] ipam/ipam.go 526: Trying affinity for 192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.498 [INFO][5681] ipam/ipam.go 160: Attempting to load block cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.500 [INFO][5681] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.500 [INFO][5681] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.502 [INFO][5681] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494 Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.508 [INFO][5681] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.520 [INFO][5681] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.113.136/26] block=192.168.113.128/26 handle="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.520 [INFO][5681] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.113.136/26] handle="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" host="ci-4081.3.101-160832fd4e" Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.520 [INFO][5681] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:14.555014 containerd[1729]: 2026-03-02 13:32:14.520 [INFO][5681] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.113.136/26] IPv6=[] ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" HandleID="k8s-pod-network.7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.556449 containerd[1729]: 2026-03-02 13:32:14.522 [INFO][5667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"08101d6f-0f5b-4f23-bfc4-ff79bac2631a", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"", Pod:"calico-apiserver-6cd4748474-xxvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77b93d53597", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:14.556449 containerd[1729]: 2026-03-02 13:32:14.522 [INFO][5667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.136/32] ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.556449 containerd[1729]: 2026-03-02 13:32:14.522 [INFO][5667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77b93d53597 ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.556449 containerd[1729]: 2026-03-02 13:32:14.528 [INFO][5667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" 
WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.556449 containerd[1729]: 2026-03-02 13:32:14.529 [INFO][5667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"08101d6f-0f5b-4f23-bfc4-ff79bac2631a", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494", Pod:"calico-apiserver-6cd4748474-xxvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77b93d53597", MAC:"16:28:fe:eb:45:94", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:14.556449 containerd[1729]: 2026-03-02 13:32:14.549 [INFO][5667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494" Namespace="calico-system" Pod="calico-apiserver-6cd4748474-xxvfj" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:14.641031 containerd[1729]: time="2026-03-02T13:32:14.640580328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:32:14.641031 containerd[1729]: time="2026-03-02T13:32:14.640631408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:32:14.641031 containerd[1729]: time="2026-03-02T13:32:14.640678288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:14.641031 containerd[1729]: time="2026-03-02T13:32:14.640774768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:32:14.669636 systemd[1]: Started cri-containerd-7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494.scope - libcontainer container 7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494. 
Mar 2 13:32:14.743543 containerd[1729]: time="2026-03-02T13:32:14.743413925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd4748474-xxvfj,Uid:08101d6f-0f5b-4f23-bfc4-ff79bac2631a,Namespace:calico-system,Attempt:1,} returns sandbox id \"7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494\"" Mar 2 13:32:15.069603 systemd-networkd[1352]: cali5ee50ff6fdb: Gained IPv6LL Mar 2 13:32:15.266124 containerd[1729]: time="2026-03-02T13:32:15.265970198Z" level=info msg="CreateContainer within sandbox \"7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 2 13:32:15.267235 containerd[1729]: time="2026-03-02T13:32:15.267118200Z" level=info msg="StopPodSandbox for \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\"" Mar 2 13:32:15.295842 containerd[1729]: time="2026-03-02T13:32:15.295710272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:15.305049 containerd[1729]: time="2026-03-02T13:32:15.305001643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3: active requests=0, bytes read=13755078" Mar 2 13:32:15.325171 containerd[1729]: time="2026-03-02T13:32:15.323590384Z" level=info msg="ImageCreate event name:\"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:15.331838 containerd[1729]: time="2026-03-02T13:32:15.331668553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:15.332673 containerd[1729]: time="2026-03-02T13:32:15.332321474Z" level=info msg="CreateContainer within sandbox 
\"7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2006ef1b634a5f5de838a17ebeb6e775aa94fc13510b03b769b45d4feb7393e\"" Mar 2 13:32:15.332673 containerd[1729]: time="2026-03-02T13:32:15.332434274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" with image id \"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:2bdced3111efc84af5b77534155b084a55a3f839010807e7e83e75faefc8cf33\", size \"15152555\" in 2.325381881s" Mar 2 13:32:15.332673 containerd[1729]: time="2026-03-02T13:32:15.332453874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.3\" returns image reference \"sha256:c55251c1db32bbbf386d6ef9309a13d39443eef28f12c0883c2fd06bc5561b09\"" Mar 2 13:32:15.336030 containerd[1729]: time="2026-03-02T13:32:15.334628956Z" level=info msg="StartContainer for \"f2006ef1b634a5f5de838a17ebeb6e775aa94fc13510b03b769b45d4feb7393e\"" Mar 2 13:32:15.346597 containerd[1729]: time="2026-03-02T13:32:15.346557850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\"" Mar 2 13:32:15.360306 containerd[1729]: time="2026-03-02T13:32:15.359438225Z" level=info msg="CreateContainer within sandbox \"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 2 13:32:15.384662 systemd[1]: Started cri-containerd-f2006ef1b634a5f5de838a17ebeb6e775aa94fc13510b03b769b45d4feb7393e.scope - libcontainer container f2006ef1b634a5f5de838a17ebeb6e775aa94fc13510b03b769b45d4feb7393e. Mar 2 13:32:15.397875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569614424.mount: Deactivated successfully. 
Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.306 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a206f66-6255-4ca6-b52b-ffcb569d488d", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb", Pod:"csi-node-driver-jm7xw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8e9bdb62cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.306 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.306 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" iface="eth0" netns="" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.306 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.306 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.367 [INFO][5777] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.367 [INFO][5777] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.367 [INFO][5777] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.382 [WARNING][5777] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.382 [INFO][5777] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.388 [INFO][5777] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:15.399542 containerd[1729]: 2026-03-02 13:32:15.393 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.401795 containerd[1729]: time="2026-03-02T13:32:15.401232232Z" level=info msg="TearDown network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\" successfully" Mar 2 13:32:15.401795 containerd[1729]: time="2026-03-02T13:32:15.401266872Z" level=info msg="StopPodSandbox for \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\" returns successfully" Mar 2 13:32:15.404605 containerd[1729]: time="2026-03-02T13:32:15.404560756Z" level=info msg="RemovePodSandbox for \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\"" Mar 2 13:32:15.406334 containerd[1729]: time="2026-03-02T13:32:15.405869957Z" level=info msg="Forcibly stopping sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\"" Mar 2 13:32:15.421654 containerd[1729]: time="2026-03-02T13:32:15.421605895Z" level=info msg="CreateContainer within sandbox \"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e01d602a4c6b39aa7c95cca95a1d45a0e9bc737e8d6c65352d7c59af00fac1fb\"" Mar 2 13:32:15.422599 containerd[1729]: time="2026-03-02T13:32:15.422575136Z" level=info msg="StartContainer for \"e01d602a4c6b39aa7c95cca95a1d45a0e9bc737e8d6c65352d7c59af00fac1fb\"" Mar 2 13:32:15.473781 containerd[1729]: time="2026-03-02T13:32:15.473660874Z" level=info msg="StartContainer for \"f2006ef1b634a5f5de838a17ebeb6e775aa94fc13510b03b769b45d4feb7393e\" returns successfully" Mar 2 13:32:15.483457 systemd[1]: Started cri-containerd-e01d602a4c6b39aa7c95cca95a1d45a0e9bc737e8d6c65352d7c59af00fac1fb.scope - libcontainer container e01d602a4c6b39aa7c95cca95a1d45a0e9bc737e8d6c65352d7c59af00fac1fb. Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.477 [WARNING][5818] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8a206f66-6255-4ca6-b52b-ffcb569d488d", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7494d65b57", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"f6edd576d7d86cb1caa86025a15939e0690595651406ff12a3a04d4a29a546eb", Pod:"csi-node-driver-jm7xw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif8e9bdb62cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.479 [INFO][5818] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.479 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" iface="eth0" netns="" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.479 [INFO][5818] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.479 [INFO][5818] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.517 [INFO][5855] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.518 [INFO][5855] ipam/ipam_plugin.go 438: 
About to acquire host-wide IPAM lock. Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.518 [INFO][5855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.532 [WARNING][5855] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.532 [INFO][5855] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" HandleID="k8s-pod-network.7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Workload="ci--4081.3.101--160832fd4e-k8s-csi--node--driver--jm7xw-eth0" Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.536 [INFO][5855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:15.547656 containerd[1729]: 2026-03-02 13:32:15.542 [INFO][5818] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7" Mar 2 13:32:15.547656 containerd[1729]: time="2026-03-02T13:32:15.547382238Z" level=info msg="TearDown network for sandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\" successfully" Mar 2 13:32:15.561797 containerd[1729]: time="2026-03-02T13:32:15.561515334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:15.562490 containerd[1729]: time="2026-03-02T13:32:15.562175415Z" level=info msg="RemovePodSandbox \"7bfcafd09b8590db0d9f5c5cf7547aa809dddaeafe601a9da95bc863ca3b8ed7\" returns successfully" Mar 2 13:32:15.564926 containerd[1729]: time="2026-03-02T13:32:15.564360377Z" level=info msg="StopPodSandbox for \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\"" Mar 2 13:32:15.570657 containerd[1729]: time="2026-03-02T13:32:15.570495824Z" level=info msg="StartContainer for \"e01d602a4c6b39aa7c95cca95a1d45a0e9bc737e8d6c65352d7c59af00fac1fb\" returns successfully" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.624 [WARNING][5893] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.624 [INFO][5893] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.624 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" iface="eth0" netns="" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.624 [INFO][5893] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.624 [INFO][5893] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.653 [INFO][5904] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.654 [INFO][5904] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.654 [INFO][5904] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.669 [WARNING][5904] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.669 [INFO][5904] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.676 [INFO][5904] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:15.685538 containerd[1729]: 2026-03-02 13:32:15.681 [INFO][5893] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.686271 containerd[1729]: time="2026-03-02T13:32:15.685840435Z" level=info msg="TearDown network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\" successfully" Mar 2 13:32:15.686271 containerd[1729]: time="2026-03-02T13:32:15.685870675Z" level=info msg="StopPodSandbox for \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\" returns successfully" Mar 2 13:32:15.687563 containerd[1729]: time="2026-03-02T13:32:15.686977397Z" level=info msg="RemovePodSandbox for \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\"" Mar 2 13:32:15.687563 containerd[1729]: time="2026-03-02T13:32:15.687016037Z" level=info msg="Forcibly stopping sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\"" Mar 2 13:32:15.700509 kubelet[3208]: I0302 13:32:15.700437 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6cd4748474-xxvfj" podStartSLOduration=41.700418532 
podStartE2EDuration="41.700418532s" podCreationTimestamp="2026-03-02 13:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:32:15.668914336 +0000 UTC m=+60.509344819" watchObservedRunningTime="2026-03-02 13:32:15.700418532 +0000 UTC m=+60.540849015" Mar 2 13:32:15.701503 kubelet[3208]: I0302 13:32:15.701282 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jm7xw" podStartSLOduration=26.867214946 podStartE2EDuration="39.701264173s" podCreationTimestamp="2026-03-02 13:31:36 +0000 UTC" firstStartedPulling="2026-03-02 13:32:02.50971562 +0000 UTC m=+47.350146103" lastFinishedPulling="2026-03-02 13:32:15.343764847 +0000 UTC m=+60.184195330" observedRunningTime="2026-03-02 13:32:15.701198893 +0000 UTC m=+60.541629376" watchObservedRunningTime="2026-03-02 13:32:15.701264173 +0000 UTC m=+60.541694656" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.747 [WARNING][5918] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" WorkloadEndpoint="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.747 [INFO][5918] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.747 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" iface="eth0" netns="" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.747 [INFO][5918] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.747 [INFO][5918] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.774 [INFO][5927] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.775 [INFO][5927] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.775 [INFO][5927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.788 [WARNING][5927] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.789 [INFO][5927] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" HandleID="k8s-pod-network.909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Workload="ci--4081.3.101--160832fd4e-k8s-whisker--57d877f87b--svvn6-eth0" Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.791 [INFO][5927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:15.796616 containerd[1729]: 2026-03-02 13:32:15.793 [INFO][5918] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb" Mar 2 13:32:15.796616 containerd[1729]: time="2026-03-02T13:32:15.795632040Z" level=info msg="TearDown network for sandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\" successfully" Mar 2 13:32:15.811123 containerd[1729]: time="2026-03-02T13:32:15.810244656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:15.811367 containerd[1729]: time="2026-03-02T13:32:15.811342618Z" level=info msg="RemovePodSandbox \"909786b6ddb78da0de4d78ff47e2ea9993ad89fecb826d6fca50f28420f2e3cb\" returns successfully" Mar 2 13:32:15.812440 containerd[1729]: time="2026-03-02T13:32:15.812157819Z" level=info msg="StopPodSandbox for \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\"" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.850 [WARNING][5941] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"07045e73-ca01-465e-919f-8f478e008766", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2", Pod:"goldmane-9566f57b5-nnh7j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic443914566a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.853 [INFO][5941] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.853 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" iface="eth0" netns="" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.853 [INFO][5941] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.853 [INFO][5941] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.880 [INFO][5948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.881 [INFO][5948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.881 [INFO][5948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.890 [WARNING][5948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.890 [INFO][5948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.892 [INFO][5948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:15.897047 containerd[1729]: 2026-03-02 13:32:15.895 [INFO][5941] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.897592 containerd[1729]: time="2026-03-02T13:32:15.897561396Z" level=info msg="TearDown network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\" successfully" Mar 2 13:32:15.897775 containerd[1729]: time="2026-03-02T13:32:15.897664836Z" level=info msg="StopPodSandbox for \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\" returns successfully" Mar 2 13:32:15.898572 containerd[1729]: time="2026-03-02T13:32:15.898194556Z" level=info msg="RemovePodSandbox for \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\"" Mar 2 13:32:15.898572 containerd[1729]: time="2026-03-02T13:32:15.898226156Z" level=info msg="Forcibly stopping sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\"" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.934 [WARNING][5963] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0", GenerateName:"goldmane-9566f57b5-", Namespace:"calico-system", SelfLink:"", UID:"07045e73-ca01-465e-919f-8f478e008766", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9566f57b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"5d33bcf40567beb49b50d36d4b8d5bd72f9c9af92668f8da1ff7766626fc32d2", Pod:"goldmane-9566f57b5-nnh7j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic443914566a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.935 [INFO][5963] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.935 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" iface="eth0" netns="" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.935 [INFO][5963] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.935 [INFO][5963] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.962 [INFO][5970] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.962 [INFO][5970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.962 [INFO][5970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.977 [WARNING][5970] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.977 [INFO][5970] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" HandleID="k8s-pod-network.f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Workload="ci--4081.3.101--160832fd4e-k8s-goldmane--9566f57b5--nnh7j-eth0" Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.979 [INFO][5970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:15.983591 containerd[1729]: 2026-03-02 13:32:15.981 [INFO][5963] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd" Mar 2 13:32:15.984679 containerd[1729]: time="2026-03-02T13:32:15.984131854Z" level=info msg="TearDown network for sandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\" successfully" Mar 2 13:32:15.995131 containerd[1729]: time="2026-03-02T13:32:15.995080746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:15.995354 containerd[1729]: time="2026-03-02T13:32:15.995334427Z" level=info msg="RemovePodSandbox \"f26bac3fecd0162b6f6c79cbff419c65c9faa992fa363f3e5782d297c2db62bd\" returns successfully" Mar 2 13:32:15.996320 containerd[1729]: time="2026-03-02T13:32:15.995964947Z" level=info msg="StopPodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\"" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.035 [WARNING][5984] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0", GenerateName:"calico-kube-controllers-75dbc7c697-", Namespace:"calico-system", SelfLink:"", UID:"72b2de10-35e9-4af6-93f9-e337a5d88bbb", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dbc7c697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682", Pod:"calico-kube-controllers-75dbc7c697-wkcts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ee50ff6fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.035 [INFO][5984] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.035 [INFO][5984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" iface="eth0" netns="" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.035 [INFO][5984] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.035 [INFO][5984] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.059 [INFO][5991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.059 [INFO][5991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.059 [INFO][5991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.068 [WARNING][5991] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.069 [INFO][5991] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.070 [INFO][5991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.075645 containerd[1729]: 2026-03-02 13:32:16.072 [INFO][5984] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.075645 containerd[1729]: time="2026-03-02T13:32:16.074596637Z" level=info msg="TearDown network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" successfully" Mar 2 13:32:16.075645 containerd[1729]: time="2026-03-02T13:32:16.074624597Z" level=info msg="StopPodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" returns successfully" Mar 2 13:32:16.076105 containerd[1729]: time="2026-03-02T13:32:16.075756878Z" level=info msg="RemovePodSandbox for \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\"" Mar 2 13:32:16.076105 containerd[1729]: time="2026-03-02T13:32:16.075788118Z" level=info msg="Forcibly stopping sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\"" Mar 2 13:32:16.157667 systemd-networkd[1352]: cali77b93d53597: Gained IPv6LL Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.115 [WARNING][6005] cni-plugin/k8s.go 616: CNI_CONTAINERID does 
not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0", GenerateName:"calico-kube-controllers-75dbc7c697-", Namespace:"calico-system", SelfLink:"", UID:"72b2de10-35e9-4af6-93f9-e337a5d88bbb", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75dbc7c697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682", Pod:"calico-kube-controllers-75dbc7c697-wkcts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ee50ff6fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.116 [INFO][6005] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.170501 containerd[1729]: 
2026-03-02 13:32:16.116 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" iface="eth0" netns="" Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.116 [INFO][6005] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.117 [INFO][6005] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.148 [INFO][6013] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.148 [INFO][6013] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.148 [INFO][6013] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.160 [WARNING][6013] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.161 [INFO][6013] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" HandleID="k8s-pod-network.77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Workload="ci--4081.3.101--160832fd4e-k8s-calico--kube--controllers--75dbc7c697--wkcts-eth0" Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.162 [INFO][6013] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.170501 containerd[1729]: 2026-03-02 13:32:16.166 [INFO][6005] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1" Mar 2 13:32:16.171447 containerd[1729]: time="2026-03-02T13:32:16.170536386Z" level=info msg="TearDown network for sandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" successfully" Mar 2 13:32:16.183198 containerd[1729]: time="2026-03-02T13:32:16.183147320Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:16.183331 containerd[1729]: time="2026-03-02T13:32:16.183231920Z" level=info msg="RemovePodSandbox \"77be2d37490d2547c9c0c286ec1c61216da543f8451a0450ce9bc13d4f26e9d1\" returns successfully" Mar 2 13:32:16.183764 containerd[1729]: time="2026-03-02T13:32:16.183702121Z" level=info msg="StopPodSandbox for \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\"" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.224 [WARNING][6030] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"0ce87770-1399-4d16-9b2e-f92d3470b97a", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6", Pod:"calico-apiserver-6cd4748474-c2zkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic5efec2ed78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.224 [INFO][6030] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.224 [INFO][6030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" iface="eth0" netns="" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.224 [INFO][6030] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.224 [INFO][6030] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.249 [INFO][6038] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.249 [INFO][6038] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.249 [INFO][6038] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.260 [WARNING][6038] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.260 [INFO][6038] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.262 [INFO][6038] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.267069 containerd[1729]: 2026-03-02 13:32:16.264 [INFO][6030] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.269738 containerd[1729]: time="2026-03-02T13:32:16.267114975Z" level=info msg="TearDown network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\" successfully" Mar 2 13:32:16.269738 containerd[1729]: time="2026-03-02T13:32:16.267140055Z" level=info msg="StopPodSandbox for \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\" returns successfully" Mar 2 13:32:16.269738 containerd[1729]: time="2026-03-02T13:32:16.267796416Z" level=info msg="RemovePodSandbox for \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\"" Mar 2 13:32:16.269738 containerd[1729]: time="2026-03-02T13:32:16.267830216Z" level=info msg="Forcibly stopping sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\"" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.318 [WARNING][6053] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"0ce87770-1399-4d16-9b2e-f92d3470b97a", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"67e63bd845ed7d8ce591198019ae5480fc9ca2fea87ae3acb60e84ac92ce82c6", Pod:"calico-apiserver-6cd4748474-c2zkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic5efec2ed78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.319 [INFO][6053] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.319 [INFO][6053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" iface="eth0" netns="" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.319 [INFO][6053] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.319 [INFO][6053] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.342 [INFO][6060] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.343 [INFO][6060] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.343 [INFO][6060] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.352 [WARNING][6060] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.352 [INFO][6060] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" HandleID="k8s-pod-network.e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--c2zkg-eth0" Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.354 [INFO][6060] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.359557 containerd[1729]: 2026-03-02 13:32:16.356 [INFO][6053] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740" Mar 2 13:32:16.359557 containerd[1729]: time="2026-03-02T13:32:16.358607719Z" level=info msg="TearDown network for sandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\" successfully" Mar 2 13:32:16.365953 containerd[1729]: time="2026-03-02T13:32:16.365864007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:16.366063 containerd[1729]: time="2026-03-02T13:32:16.366002248Z" level=info msg="RemovePodSandbox \"e07423fbbd4613e83c64d7a436603adaa13cf92a86201d054bc6b7f5eb042740\" returns successfully" Mar 2 13:32:16.366552 containerd[1729]: time="2026-03-02T13:32:16.366527128Z" level=info msg="StopPodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\"" Mar 2 13:32:16.379736 kubelet[3208]: I0302 13:32:16.379547 3208 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 2 13:32:16.380453 kubelet[3208]: I0302 13:32:16.380144 3208 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.430 [WARNING][6074] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"08101d6f-0f5b-4f23-bfc4-ff79bac2631a", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494", Pod:"calico-apiserver-6cd4748474-xxvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77b93d53597", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.430 [INFO][6074] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.430 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" iface="eth0" netns="" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.430 [INFO][6074] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.430 [INFO][6074] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.472 [INFO][6082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.473 [INFO][6082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.473 [INFO][6082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.483 [WARNING][6082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.483 [INFO][6082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.485 [INFO][6082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.491565 containerd[1729]: 2026-03-02 13:32:16.489 [INFO][6074] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.492524 containerd[1729]: time="2026-03-02T13:32:16.492079751Z" level=info msg="TearDown network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" successfully" Mar 2 13:32:16.492524 containerd[1729]: time="2026-03-02T13:32:16.492131991Z" level=info msg="StopPodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" returns successfully" Mar 2 13:32:16.492865 containerd[1729]: time="2026-03-02T13:32:16.492833872Z" level=info msg="RemovePodSandbox for \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\"" Mar 2 13:32:16.492984 containerd[1729]: time="2026-03-02T13:32:16.492875552Z" level=info msg="Forcibly stopping sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\"" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.530 [WARNING][6097] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0", GenerateName:"calico-apiserver-6cd4748474-", Namespace:"calico-system", SelfLink:"", UID:"08101d6f-0f5b-4f23-bfc4-ff79bac2631a", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd4748474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"7d6cd17d665ce6703a939c8153591b0c12b6a0ca883fdd779d433497f479f494", Pod:"calico-apiserver-6cd4748474-xxvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali77b93d53597", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.530 [INFO][6097] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.530 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" iface="eth0" netns="" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.530 [INFO][6097] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.530 [INFO][6097] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.552 [INFO][6104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.552 [INFO][6104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.552 [INFO][6104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.562 [WARNING][6104] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.562 [INFO][6104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" HandleID="k8s-pod-network.b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Workload="ci--4081.3.101--160832fd4e-k8s-calico--apiserver--6cd4748474--xxvfj-eth0" Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.565 [INFO][6104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.572499 containerd[1729]: 2026-03-02 13:32:16.570 [INFO][6097] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28" Mar 2 13:32:16.572499 containerd[1729]: time="2026-03-02T13:32:16.572167242Z" level=info msg="TearDown network for sandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" successfully" Mar 2 13:32:16.581531 containerd[1729]: time="2026-03-02T13:32:16.581476212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:16.581929 containerd[1729]: time="2026-03-02T13:32:16.581557812Z" level=info msg="RemovePodSandbox \"b8247314368e830d0f47710bca49d162b9745ba5d022efc84433028114482d28\" returns successfully" Mar 2 13:32:16.582025 containerd[1729]: time="2026-03-02T13:32:16.581998333Z" level=info msg="StopPodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\"" Mar 2 13:32:16.655060 kubelet[3208]: I0302 13:32:16.654482 3208 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.629 [WARNING][6118] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7638e352-eaab-46e6-a147-f204ad1cab74", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854", Pod:"coredns-674b8bbfcf-5dqvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5bf1220b91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.629 [INFO][6118] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.629 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" iface="eth0" netns="" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.629 [INFO][6118] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.629 [INFO][6118] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.653 [INFO][6125] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.654 [INFO][6125] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.654 [INFO][6125] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.668 [WARNING][6125] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.668 [INFO][6125] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.669 [INFO][6125] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.673844 containerd[1729]: 2026-03-02 13:32:16.671 [INFO][6118] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.674550 containerd[1729]: time="2026-03-02T13:32:16.673961437Z" level=info msg="TearDown network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" successfully" Mar 2 13:32:16.674550 containerd[1729]: time="2026-03-02T13:32:16.673994677Z" level=info msg="StopPodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" returns successfully" Mar 2 13:32:16.674646 containerd[1729]: time="2026-03-02T13:32:16.674463438Z" level=info msg="RemovePodSandbox for \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\"" Mar 2 13:32:16.674679 containerd[1729]: time="2026-03-02T13:32:16.674646358Z" level=info msg="Forcibly stopping sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\"" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.713 [WARNING][6140] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7638e352-eaab-46e6-a147-f204ad1cab74", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"be4328af6e0b68dcd38b3bdf22d942af5ef91dfc767eab254efb576b361b6854", Pod:"coredns-674b8bbfcf-5dqvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5bf1220b91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 
13:32:16.714 [INFO][6140] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.714 [INFO][6140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" iface="eth0" netns="" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.714 [INFO][6140] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.714 [INFO][6140] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.735 [INFO][6147] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.735 [INFO][6147] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.735 [INFO][6147] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.747 [WARNING][6147] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.747 [INFO][6147] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" HandleID="k8s-pod-network.43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--5dqvc-eth0" Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.756 [INFO][6147] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.762771 containerd[1729]: 2026-03-02 13:32:16.760 [INFO][6140] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885" Mar 2 13:32:16.763212 containerd[1729]: time="2026-03-02T13:32:16.762837938Z" level=info msg="TearDown network for sandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" successfully" Mar 2 13:32:16.771008 containerd[1729]: time="2026-03-02T13:32:16.770952627Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 2 13:32:16.771091 containerd[1729]: time="2026-03-02T13:32:16.771069628Z" level=info msg="RemovePodSandbox \"43d1211edc8833fbe28cccf3de96ac20df9c7d82d035d5cf804272847d617885\" returns successfully" Mar 2 13:32:16.771592 containerd[1729]: time="2026-03-02T13:32:16.771566948Z" level=info msg="StopPodSandbox for \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\"" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.819 [WARNING][6162] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cbb205fa-9365-49ad-a5f5-156be4f3e7b0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262", Pod:"coredns-674b8bbfcf-gfmzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia708df760fa", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.820 [INFO][6162] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.820 [INFO][6162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" iface="eth0" netns="" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.820 [INFO][6162] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.820 [INFO][6162] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.841 [INFO][6169] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.842 [INFO][6169] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.842 [INFO][6169] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.856 [WARNING][6169] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.856 [INFO][6169] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.857 [INFO][6169] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.862498 containerd[1729]: 2026-03-02 13:32:16.860 [INFO][6162] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.862498 containerd[1729]: time="2026-03-02T13:32:16.862288691Z" level=info msg="TearDown network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\" successfully" Mar 2 13:32:16.862498 containerd[1729]: time="2026-03-02T13:32:16.862314851Z" level=info msg="StopPodSandbox for \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\" returns successfully" Mar 2 13:32:16.863556 containerd[1729]: time="2026-03-02T13:32:16.863450212Z" level=info msg="RemovePodSandbox for \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\"" Mar 2 13:32:16.863605 containerd[1729]: time="2026-03-02T13:32:16.863587053Z" level=info msg="Forcibly stopping sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\"" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.903 [WARNING][6184] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cbb205fa-9365-49ad-a5f5-156be4f3e7b0", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 2, 13, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-160832fd4e", ContainerID:"c92670309132439e375e5cd11bb8720f0e0fca5d18f9dacc8e6425ddee31a262", Pod:"coredns-674b8bbfcf-gfmzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia708df760fa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 
13:32:16.903 [INFO][6184] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.903 [INFO][6184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" iface="eth0" netns="" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.903 [INFO][6184] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.903 [INFO][6184] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.928 [INFO][6191] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.929 [INFO][6191] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.929 [INFO][6191] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.940 [WARNING][6191] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.940 [INFO][6191] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" HandleID="k8s-pod-network.5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Workload="ci--4081.3.101--160832fd4e-k8s-coredns--674b8bbfcf--gfmzk-eth0" Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.941 [INFO][6191] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 2 13:32:16.946269 containerd[1729]: 2026-03-02 13:32:16.944 [INFO][6184] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe" Mar 2 13:32:16.946969 containerd[1729]: time="2026-03-02T13:32:16.946250227Z" level=info msg="TearDown network for sandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\" successfully" Mar 2 13:32:16.956627 containerd[1729]: time="2026-03-02T13:32:16.956569958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 2 13:32:16.956866 containerd[1729]: time="2026-03-02T13:32:16.956658558Z" level=info msg="RemovePodSandbox \"5d962e0ba517a6a76f695d3abf3ce043f360365790f3e0eca7e80d88c7fa9cbe\" returns successfully" Mar 2 13:32:17.686172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920443153.mount: Deactivated successfully. 
Mar 2 13:32:17.812147 containerd[1729]: time="2026-03-02T13:32:17.811331009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:17.815124 containerd[1729]: time="2026-03-02T13:32:17.815082933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.3: active requests=0, bytes read=16420592" Mar 2 13:32:17.818429 containerd[1729]: time="2026-03-02T13:32:17.818375137Z" level=info msg="ImageCreate event name:\"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:17.823752 containerd[1729]: time="2026-03-02T13:32:17.823682423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:17.824748 containerd[1729]: time="2026-03-02T13:32:17.824668904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" with image id \"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:359cb5c751e049ac0bb62c4f7e49b1ac81c59935c70715f5ff4c39a757bf9f38\", size \"16420422\" in 2.478070894s" Mar 2 13:32:17.824748 containerd[1729]: time="2026-03-02T13:32:17.824704904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.3\" returns image reference \"sha256:d6c2d25ea514599ef2dbba86e46277491ee9c1e15519321c135bb514b2f46aeb\"" Mar 2 13:32:17.828438 containerd[1729]: time="2026-03-02T13:32:17.828240548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\"" Mar 2 13:32:17.835337 containerd[1729]: time="2026-03-02T13:32:17.835110596Z" level=info msg="CreateContainer within sandbox 
\"bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 2 13:32:17.863539 containerd[1729]: time="2026-03-02T13:32:17.863495228Z" level=info msg="CreateContainer within sandbox \"bc2d440aa09e66841799e5eb6c103e17efedcb0217fc1fefcc83dbe629bc0e0a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fe67279e5a1671fea1db6538ff5d8e53cca4b569351537b2f09fef2f895da44b\"" Mar 2 13:32:17.864663 containerd[1729]: time="2026-03-02T13:32:17.864631749Z" level=info msg="StartContainer for \"fe67279e5a1671fea1db6538ff5d8e53cca4b569351537b2f09fef2f895da44b\"" Mar 2 13:32:17.892680 systemd[1]: Started cri-containerd-fe67279e5a1671fea1db6538ff5d8e53cca4b569351537b2f09fef2f895da44b.scope - libcontainer container fe67279e5a1671fea1db6538ff5d8e53cca4b569351537b2f09fef2f895da44b. Mar 2 13:32:17.930183 containerd[1729]: time="2026-03-02T13:32:17.930018104Z" level=info msg="StartContainer for \"fe67279e5a1671fea1db6538ff5d8e53cca4b569351537b2f09fef2f895da44b\" returns successfully" Mar 2 13:32:20.685389 containerd[1729]: time="2026-03-02T13:32:20.685160800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:20.687604 containerd[1729]: time="2026-03-02T13:32:20.687556398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.3: active requests=0, bytes read=49157508" Mar 2 13:32:20.694738 containerd[1729]: time="2026-03-02T13:32:20.694679152Z" level=info msg="ImageCreate event name:\"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:20.701932 containerd[1729]: time="2026-03-02T13:32:20.701865946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:32:20.702680 containerd[1729]: time="2026-03-02T13:32:20.702645585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" with image id \"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:081fd6c3de7754ba9892532b2c7c6cae9ba7bd1cca4c42e4590ee8d0f5a5696b\", size \"50555001\" in 2.874348037s" Mar 2 13:32:20.702742 containerd[1729]: time="2026-03-02T13:32:20.702681905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.3\" returns image reference \"sha256:f91182157dd9b43afadc3f9d6dbd919b0ec222fc40e9fa608989310b81c1f18c\"" Mar 2 13:32:20.723427 containerd[1729]: time="2026-03-02T13:32:20.723215447Z" level=info msg="CreateContainer within sandbox \"40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 2 13:32:20.762214 containerd[1729]: time="2026-03-02T13:32:20.762167933Z" level=info msg="CreateContainer within sandbox \"40c590675cc8f5bf84bce845418c5d67f23d4979d2394e75d79693ef4d33c682\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"708db2c2fa6c7dc636ba206d230de04b0ecc60f269f15ced48cb84a59e24cc97\"" Mar 2 13:32:20.762828 containerd[1729]: time="2026-03-02T13:32:20.762708613Z" level=info msg="StartContainer for \"708db2c2fa6c7dc636ba206d230de04b0ecc60f269f15ced48cb84a59e24cc97\"" Mar 2 13:32:20.794700 systemd[1]: Started cri-containerd-708db2c2fa6c7dc636ba206d230de04b0ecc60f269f15ced48cb84a59e24cc97.scope - libcontainer container 708db2c2fa6c7dc636ba206d230de04b0ecc60f269f15ced48cb84a59e24cc97. 
Mar 2 13:32:20.832292 containerd[1729]: time="2026-03-02T13:32:20.832228833Z" level=info msg="StartContainer for \"708db2c2fa6c7dc636ba206d230de04b0ecc60f269f15ced48cb84a59e24cc97\" returns successfully" Mar 2 13:32:21.718280 kubelet[3208]: I0302 13:32:21.718220 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75dbc7c697-wkcts" podStartSLOduration=38.645807345 podStartE2EDuration="45.718204427s" podCreationTimestamp="2026-03-02 13:31:36 +0000 UTC" firstStartedPulling="2026-03-02 13:32:13.631341982 +0000 UTC m=+58.471772465" lastFinishedPulling="2026-03-02 13:32:20.703739064 +0000 UTC m=+65.544169547" observedRunningTime="2026-03-02 13:32:21.718045387 +0000 UTC m=+66.558475870" watchObservedRunningTime="2026-03-02 13:32:21.718204427 +0000 UTC m=+66.558634910" Mar 2 13:32:21.718748 kubelet[3208]: I0302 13:32:21.718442 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-755469947b-sb77n" podStartSLOduration=5.240927411 podStartE2EDuration="19.718437227s" podCreationTimestamp="2026-03-02 13:32:02 +0000 UTC" firstStartedPulling="2026-03-02 13:32:03.349609091 +0000 UTC m=+48.190039574" lastFinishedPulling="2026-03-02 13:32:17.827118947 +0000 UTC m=+62.667549390" observedRunningTime="2026-03-02 13:32:18.700741259 +0000 UTC m=+63.541171742" watchObservedRunningTime="2026-03-02 13:32:21.718437227 +0000 UTC m=+66.558867710" Mar 2 13:32:45.080602 kubelet[3208]: I0302 13:32:45.079643 3208 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:32:51.583958 kubelet[3208]: I0302 13:32:51.583918 3208 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 2 13:33:31.865767 systemd[1]: Started sshd@7-10.200.20.22:22-10.200.16.10:36794.service - OpenSSH per-connection server daemon (10.200.16.10:36794). 
Mar 2 13:33:32.358491 sshd[6570]: Accepted publickey for core from 10.200.16.10 port 36794 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:32.360373 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:32.364574 systemd-logind[1692]: New session 10 of user core. Mar 2 13:33:32.372672 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:33:32.787713 sshd[6570]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:32.792362 systemd[1]: sshd@7-10.200.20.22:22-10.200.16.10:36794.service: Deactivated successfully. Mar 2 13:33:32.794846 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:33:32.795834 systemd-logind[1692]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:33:32.797086 systemd-logind[1692]: Removed session 10. Mar 2 13:33:37.881550 systemd[1]: Started sshd@8-10.200.20.22:22-10.200.16.10:36808.service - OpenSSH per-connection server daemon (10.200.16.10:36808). Mar 2 13:33:38.368508 sshd[6633]: Accepted publickey for core from 10.200.16.10 port 36808 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:38.369582 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:38.374562 systemd-logind[1692]: New session 11 of user core. Mar 2 13:33:38.378662 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:33:38.776559 sshd[6633]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:38.780678 systemd[1]: sshd@8-10.200.20.22:22-10.200.16.10:36808.service: Deactivated successfully. Mar 2 13:33:38.783140 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:33:38.783931 systemd-logind[1692]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:33:38.785518 systemd-logind[1692]: Removed session 11. 
Mar 2 13:33:43.874251 systemd[1]: Started sshd@9-10.200.20.22:22-10.200.16.10:34320.service - OpenSSH per-connection server daemon (10.200.16.10:34320). Mar 2 13:33:44.361921 sshd[6679]: Accepted publickey for core from 10.200.16.10 port 34320 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:44.363389 sshd[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:44.367806 systemd-logind[1692]: New session 12 of user core. Mar 2 13:33:44.373658 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:33:44.782785 sshd[6679]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:44.786031 systemd[1]: sshd@9-10.200.20.22:22-10.200.16.10:34320.service: Deactivated successfully. Mar 2 13:33:44.788122 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:33:44.792506 systemd-logind[1692]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:33:44.793953 systemd-logind[1692]: Removed session 12. Mar 2 13:33:49.880835 systemd[1]: Started sshd@10-10.200.20.22:22-10.200.16.10:34334.service - OpenSSH per-connection server daemon (10.200.16.10:34334). Mar 2 13:33:50.365507 sshd[6692]: Accepted publickey for core from 10.200.16.10 port 34334 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:50.366807 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:50.371006 systemd-logind[1692]: New session 13 of user core. Mar 2 13:33:50.376684 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:33:50.785557 sshd[6692]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:50.790402 systemd[1]: sshd@10-10.200.20.22:22-10.200.16.10:34334.service: Deactivated successfully. Mar 2 13:33:50.794621 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:33:50.796012 systemd-logind[1692]: Session 13 logged out. Waiting for processes to exit. 
Mar 2 13:33:50.797195 systemd-logind[1692]: Removed session 13. Mar 2 13:33:50.881430 systemd[1]: Started sshd@11-10.200.20.22:22-10.200.16.10:43680.service - OpenSSH per-connection server daemon (10.200.16.10:43680). Mar 2 13:33:51.367495 sshd[6705]: Accepted publickey for core from 10.200.16.10 port 43680 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:51.368720 sshd[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:51.374370 systemd-logind[1692]: New session 14 of user core. Mar 2 13:33:51.380701 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:33:51.884874 sshd[6705]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:51.888967 systemd[1]: sshd@11-10.200.20.22:22-10.200.16.10:43680.service: Deactivated successfully. Mar 2 13:33:51.892492 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:33:51.893906 systemd-logind[1692]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:33:51.895787 systemd-logind[1692]: Removed session 14. Mar 2 13:33:51.980730 systemd[1]: Started sshd@12-10.200.20.22:22-10.200.16.10:43684.service - OpenSSH per-connection server daemon (10.200.16.10:43684). Mar 2 13:33:52.473220 sshd[6743]: Accepted publickey for core from 10.200.16.10 port 43684 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:52.474399 sshd[6743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:52.480090 systemd-logind[1692]: New session 15 of user core. Mar 2 13:33:52.485644 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:33:52.893152 sshd[6743]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:52.896491 systemd[1]: sshd@12-10.200.20.22:22-10.200.16.10:43684.service: Deactivated successfully. Mar 2 13:33:52.898442 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:33:52.899132 systemd-logind[1692]: Session 15 logged out. 
Waiting for processes to exit. Mar 2 13:33:52.900669 systemd-logind[1692]: Removed session 15. Mar 2 13:33:55.627921 systemd[1]: run-containerd-runc-k8s.io-190f252380241aba7cf9c07a6b96b7193eb74bffc8d3fe42f20bf9f0a70b7801-runc.z68X7j.mount: Deactivated successfully. Mar 2 13:33:57.986770 systemd[1]: Started sshd@13-10.200.20.22:22-10.200.16.10:43686.service - OpenSSH per-connection server daemon (10.200.16.10:43686). Mar 2 13:33:58.471762 sshd[6797]: Accepted publickey for core from 10.200.16.10 port 43686 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:58.473893 sshd[6797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:58.479718 systemd-logind[1692]: New session 16 of user core. Mar 2 13:33:58.482629 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 13:33:58.893160 sshd[6797]: pam_unix(sshd:session): session closed for user core Mar 2 13:33:58.896239 systemd-logind[1692]: Session 16 logged out. Waiting for processes to exit. Mar 2 13:33:58.897625 systemd[1]: sshd@13-10.200.20.22:22-10.200.16.10:43686.service: Deactivated successfully. Mar 2 13:33:58.900327 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:33:58.901941 systemd-logind[1692]: Removed session 16. Mar 2 13:33:58.982830 systemd[1]: Started sshd@14-10.200.20.22:22-10.200.16.10:43690.service - OpenSSH per-connection server daemon (10.200.16.10:43690). Mar 2 13:33:59.479515 sshd[6811]: Accepted publickey for core from 10.200.16.10 port 43690 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:33:59.480555 sshd[6811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:33:59.485609 systemd-logind[1692]: New session 17 of user core. Mar 2 13:33:59.489646 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 2 13:34:00.028223 sshd[6811]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:00.031244 systemd-logind[1692]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:34:00.033092 systemd[1]: sshd@14-10.200.20.22:22-10.200.16.10:43690.service: Deactivated successfully. Mar 2 13:34:00.036682 systemd[1]: session-17.scope: Deactivated successfully. Mar 2 13:34:00.038429 systemd-logind[1692]: Removed session 17. Mar 2 13:34:00.121832 systemd[1]: Started sshd@15-10.200.20.22:22-10.200.16.10:46970.service - OpenSSH per-connection server daemon (10.200.16.10:46970). Mar 2 13:34:00.616427 sshd[6822]: Accepted publickey for core from 10.200.16.10 port 46970 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:00.617340 sshd[6822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:00.621874 systemd-logind[1692]: New session 18 of user core. Mar 2 13:34:00.628846 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 13:34:01.528347 sshd[6822]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:01.533658 systemd[1]: sshd@15-10.200.20.22:22-10.200.16.10:46970.service: Deactivated successfully. Mar 2 13:34:01.537005 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:34:01.538276 systemd-logind[1692]: Session 18 logged out. Waiting for processes to exit. Mar 2 13:34:01.540695 systemd-logind[1692]: Removed session 18. Mar 2 13:34:01.615997 systemd[1]: Started sshd@16-10.200.20.22:22-10.200.16.10:46976.service - OpenSSH per-connection server daemon (10.200.16.10:46976). Mar 2 13:34:02.105699 sshd[6849]: Accepted publickey for core from 10.200.16.10 port 46976 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:02.107407 sshd[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:02.113939 systemd-logind[1692]: New session 19 of user core. 
Mar 2 13:34:02.120493 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:34:02.638832 sshd[6849]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:02.641760 systemd[1]: sshd@16-10.200.20.22:22-10.200.16.10:46976.service: Deactivated successfully. Mar 2 13:34:02.645687 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:34:02.647656 systemd-logind[1692]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:34:02.649075 systemd-logind[1692]: Removed session 19. Mar 2 13:34:02.733774 systemd[1]: Started sshd@17-10.200.20.22:22-10.200.16.10:46988.service - OpenSSH per-connection server daemon (10.200.16.10:46988). Mar 2 13:34:03.228332 sshd[6881]: Accepted publickey for core from 10.200.16.10 port 46988 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:03.230036 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:03.235179 systemd-logind[1692]: New session 20 of user core. Mar 2 13:34:03.240670 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:34:03.692254 sshd[6881]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:03.699604 systemd[1]: sshd@17-10.200.20.22:22-10.200.16.10:46988.service: Deactivated successfully. Mar 2 13:34:03.702983 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:34:03.704310 systemd-logind[1692]: Session 20 logged out. Waiting for processes to exit. Mar 2 13:34:03.706628 systemd-logind[1692]: Removed session 20. Mar 2 13:34:08.786895 systemd[1]: Started sshd@18-10.200.20.22:22-10.200.16.10:47002.service - OpenSSH per-connection server daemon (10.200.16.10:47002). 
Mar 2 13:34:09.274490 sshd[6914]: Accepted publickey for core from 10.200.16.10 port 47002 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:09.275218 sshd[6914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:09.279124 systemd-logind[1692]: New session 21 of user core. Mar 2 13:34:09.284688 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:34:09.709421 sshd[6914]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:09.713849 systemd[1]: sshd@18-10.200.20.22:22-10.200.16.10:47002.service: Deactivated successfully. Mar 2 13:34:09.716141 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:34:09.717174 systemd-logind[1692]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:34:09.718189 systemd-logind[1692]: Removed session 21. Mar 2 13:34:14.780501 systemd[1]: Started sshd@19-10.200.20.22:22-10.200.16.10:46716.service - OpenSSH per-connection server daemon (10.200.16.10:46716). Mar 2 13:34:15.274297 sshd[6947]: Accepted publickey for core from 10.200.16.10 port 46716 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:15.275205 sshd[6947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:15.279457 systemd-logind[1692]: New session 22 of user core. Mar 2 13:34:15.284711 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 13:34:15.685954 sshd[6947]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:15.689534 systemd-logind[1692]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:34:15.689732 systemd[1]: sshd@19-10.200.20.22:22-10.200.16.10:46716.service: Deactivated successfully. Mar 2 13:34:15.692658 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:34:15.695403 systemd-logind[1692]: Removed session 22. 
Mar 2 13:34:20.778855 systemd[1]: Started sshd@20-10.200.20.22:22-10.200.16.10:56062.service - OpenSSH per-connection server daemon (10.200.16.10:56062). Mar 2 13:34:21.264085 sshd[6961]: Accepted publickey for core from 10.200.16.10 port 56062 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:21.265488 sshd[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:21.271558 systemd-logind[1692]: New session 23 of user core. Mar 2 13:34:21.276653 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 13:34:21.676765 sshd[6961]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:21.681429 systemd[1]: sshd@20-10.200.20.22:22-10.200.16.10:56062.service: Deactivated successfully. Mar 2 13:34:21.685399 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 13:34:21.686243 systemd-logind[1692]: Session 23 logged out. Waiting for processes to exit. Mar 2 13:34:21.687598 systemd-logind[1692]: Removed session 23. Mar 2 13:34:26.764674 systemd[1]: Started sshd@21-10.200.20.22:22-10.200.16.10:56068.service - OpenSSH per-connection server daemon (10.200.16.10:56068). Mar 2 13:34:27.257498 sshd[6994]: Accepted publickey for core from 10.200.16.10 port 56068 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:27.258785 sshd[6994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:27.262711 systemd-logind[1692]: New session 24 of user core. Mar 2 13:34:27.270645 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 13:34:27.668757 sshd[6994]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:27.673361 systemd[1]: sshd@21-10.200.20.22:22-10.200.16.10:56068.service: Deactivated successfully. Mar 2 13:34:27.676222 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 13:34:27.677343 systemd-logind[1692]: Session 24 logged out. Waiting for processes to exit. 
Mar 2 13:34:27.678876 systemd-logind[1692]: Removed session 24. Mar 2 13:34:32.769916 systemd[1]: Started sshd@22-10.200.20.22:22-10.200.16.10:37474.service - OpenSSH per-connection server daemon (10.200.16.10:37474). Mar 2 13:34:33.259382 sshd[7027]: Accepted publickey for core from 10.200.16.10 port 37474 ssh2: RSA SHA256:P6GHSSGMSuv7dhw2Z8eg+dQY0cfeNqH37bclTqm/pu8 Mar 2 13:34:33.261745 sshd[7027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:34:33.266124 systemd-logind[1692]: New session 25 of user core. Mar 2 13:34:33.274641 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 2 13:34:33.676594 sshd[7027]: pam_unix(sshd:session): session closed for user core Mar 2 13:34:33.682119 systemd[1]: sshd@22-10.200.20.22:22-10.200.16.10:37474.service: Deactivated successfully. Mar 2 13:34:33.687573 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 13:34:33.689646 systemd-logind[1692]: Session 25 logged out. Waiting for processes to exit. Mar 2 13:34:33.691021 systemd-logind[1692]: Removed session 25.