Mar 13 12:20:18.212542 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 13 12:20:18.212566 kernel: Linux version 6.6.129-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 08:56:28 -00 2026 Mar 13 12:20:18.212575 kernel: KASLR enabled Mar 13 12:20:18.212581 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Mar 13 12:20:18.212588 kernel: printk: bootconsole [pl11] enabled Mar 13 12:20:18.212594 kernel: efi: EFI v2.7 by EDK II Mar 13 12:20:18.212601 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Mar 13 12:20:18.212607 kernel: random: crng init done Mar 13 12:20:18.212614 kernel: ACPI: Early table checksum verification disabled Mar 13 12:20:18.212619 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Mar 13 12:20:18.212626 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212632 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212639 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Mar 13 12:20:18.212646 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212653 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212659 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212666 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212674 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212680 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 
12:20:18.212687 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Mar 13 12:20:18.212693 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Mar 13 12:20:18.212700 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Mar 13 12:20:18.212706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Mar 13 12:20:18.212712 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Mar 13 12:20:18.212719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Mar 13 12:20:18.212725 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Mar 13 12:20:18.214757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Mar 13 12:20:18.214765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Mar 13 12:20:18.214778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Mar 13 12:20:18.214784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Mar 13 12:20:18.214791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Mar 13 12:20:18.214798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Mar 13 12:20:18.214805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Mar 13 12:20:18.214811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Mar 13 12:20:18.214818 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Mar 13 12:20:18.214824 kernel: Zone ranges: Mar 13 12:20:18.214830 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Mar 13 12:20:18.214838 kernel: DMA32 empty Mar 13 12:20:18.214844 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Mar 13 12:20:18.214851 kernel: Movable zone start for each node Mar 13 12:20:18.214862 kernel: Early memory node ranges Mar 13 12:20:18.214869 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Mar 13 12:20:18.214877 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Mar 13 
12:20:18.214885 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Mar 13 12:20:18.214892 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Mar 13 12:20:18.214901 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Mar 13 12:20:18.214908 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Mar 13 12:20:18.214915 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Mar 13 12:20:18.214923 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Mar 13 12:20:18.214930 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Mar 13 12:20:18.214937 kernel: psci: probing for conduit method from ACPI. Mar 13 12:20:18.214944 kernel: psci: PSCIv1.1 detected in firmware. Mar 13 12:20:18.214950 kernel: psci: Using standard PSCI v0.2 function IDs Mar 13 12:20:18.214958 kernel: psci: MIGRATE_INFO_TYPE not supported. Mar 13 12:20:18.214965 kernel: psci: SMC Calling Convention v1.4 Mar 13 12:20:18.214972 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Mar 13 12:20:18.214979 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Mar 13 12:20:18.214988 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 13 12:20:18.214994 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 13 12:20:18.215002 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 13 12:20:18.215009 kernel: Detected PIPT I-cache on CPU0 Mar 13 12:20:18.215016 kernel: CPU features: detected: GIC system register CPU interface Mar 13 12:20:18.215023 kernel: CPU features: detected: Hardware dirty bit management Mar 13 12:20:18.215030 kernel: CPU features: detected: Spectre-BHB Mar 13 12:20:18.215038 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 13 12:20:18.215044 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 13 12:20:18.215051 kernel: CPU features: detected: ARM erratum 1418040 Mar 13 12:20:18.215058 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Mar 13 12:20:18.215066 
kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 13 12:20:18.215073 kernel: alternatives: applying boot alternatives Mar 13 12:20:18.215083 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949 Mar 13 12:20:18.215090 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 13 12:20:18.215097 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 13 12:20:18.215104 kernel: Fallback order for Node 0: 0 Mar 13 12:20:18.215111 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Mar 13 12:20:18.215117 kernel: Policy zone: Normal Mar 13 12:20:18.215125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 13 12:20:18.215132 kernel: software IO TLB: area num 2. Mar 13 12:20:18.215139 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Mar 13 12:20:18.215148 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8120K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Mar 13 12:20:18.215155 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 13 12:20:18.215162 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 13 12:20:18.215170 kernel: rcu: RCU event tracing is enabled. Mar 13 12:20:18.215178 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 13 12:20:18.215184 kernel: Trampoline variant of Tasks RCU enabled. Mar 13 12:20:18.215191 kernel: Tracing variant of Tasks RCU enabled. Mar 13 12:20:18.215198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 13 12:20:18.215205 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 13 12:20:18.215213 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 13 12:20:18.215219 kernel: GICv3: 960 SPIs implemented Mar 13 12:20:18.215228 kernel: GICv3: 0 Extended SPIs implemented Mar 13 12:20:18.215236 kernel: Root IRQ handler: gic_handle_irq Mar 13 12:20:18.215242 kernel: GICv3: GICv3 features: 16 PPIs, RSS Mar 13 12:20:18.215249 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Mar 13 12:20:18.215256 kernel: ITS: No ITS available, not enabling LPIs Mar 13 12:20:18.215263 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 13 12:20:18.215270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 13 12:20:18.215277 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 13 12:20:18.215284 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 13 12:20:18.215291 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 13 12:20:18.215298 kernel: Console: colour dummy device 80x25 Mar 13 12:20:18.215307 kernel: printk: console [tty1] enabled Mar 13 12:20:18.215315 kernel: ACPI: Core revision 20230628 Mar 13 12:20:18.215322 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 13 12:20:18.215329 kernel: pid_max: default: 32768 minimum: 301 Mar 13 12:20:18.215336 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 13 12:20:18.215343 kernel: landlock: Up and running. Mar 13 12:20:18.215350 kernel: SELinux: Initializing. 
Mar 13 12:20:18.215357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 13 12:20:18.215364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 13 12:20:18.215373 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 12:20:18.215380 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 12:20:18.215388 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Mar 13 12:20:18.215395 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0 Mar 13 12:20:18.215402 kernel: Hyper-V: enabling crash_kexec_post_notifiers Mar 13 12:20:18.215409 kernel: rcu: Hierarchical SRCU implementation. Mar 13 12:20:18.215416 kernel: rcu: Max phase no-delay instances is 400. Mar 13 12:20:18.215424 kernel: Remapping and enabling EFI services. Mar 13 12:20:18.215437 kernel: smp: Bringing up secondary CPUs ... Mar 13 12:20:18.215444 kernel: Detected PIPT I-cache on CPU1 Mar 13 12:20:18.215452 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Mar 13 12:20:18.215459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 13 12:20:18.215468 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 13 12:20:18.215476 kernel: smp: Brought up 1 node, 2 CPUs Mar 13 12:20:18.215483 kernel: SMP: Total of 2 processors activated. 
Mar 13 12:20:18.215491 kernel: CPU features: detected: 32-bit EL0 Support Mar 13 12:20:18.215498 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Mar 13 12:20:18.215507 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 13 12:20:18.215515 kernel: CPU features: detected: CRC32 instructions Mar 13 12:20:18.215522 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 13 12:20:18.215530 kernel: CPU features: detected: LSE atomic instructions Mar 13 12:20:18.215537 kernel: CPU features: detected: Privileged Access Never Mar 13 12:20:18.215545 kernel: CPU: All CPU(s) started at EL1 Mar 13 12:20:18.215552 kernel: alternatives: applying system-wide alternatives Mar 13 12:20:18.215559 kernel: devtmpfs: initialized Mar 13 12:20:18.215567 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 13 12:20:18.215576 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 13 12:20:18.215583 kernel: pinctrl core: initialized pinctrl subsystem Mar 13 12:20:18.215591 kernel: SMBIOS 3.1.0 present. 
Mar 13 12:20:18.215599 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Mar 13 12:20:18.215606 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 13 12:20:18.215613 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 13 12:20:18.215621 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 13 12:20:18.215628 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 13 12:20:18.215636 kernel: audit: initializing netlink subsys (disabled) Mar 13 12:20:18.215645 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Mar 13 12:20:18.215652 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 13 12:20:18.215660 kernel: cpuidle: using governor menu Mar 13 12:20:18.215667 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 13 12:20:18.215675 kernel: ASID allocator initialised with 32768 entries Mar 13 12:20:18.215682 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 13 12:20:18.215689 kernel: Serial: AMBA PL011 UART driver Mar 13 12:20:18.215697 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 13 12:20:18.215704 kernel: Modules: 0 pages in range for non-PLT usage Mar 13 12:20:18.215713 kernel: Modules: 509008 pages in range for PLT usage Mar 13 12:20:18.215721 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 13 12:20:18.217796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 13 12:20:18.217812 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 13 12:20:18.217820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 13 12:20:18.217827 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 13 12:20:18.217835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 13 12:20:18.217843 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Mar 13 12:20:18.217851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 13 12:20:18.217864 kernel: ACPI: Added _OSI(Module Device) Mar 13 12:20:18.217871 kernel: ACPI: Added _OSI(Processor Device) Mar 13 12:20:18.217879 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 13 12:20:18.217886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 13 12:20:18.217894 kernel: ACPI: Interpreter enabled Mar 13 12:20:18.217901 kernel: ACPI: Using GIC for interrupt routing Mar 13 12:20:18.217909 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Mar 13 12:20:18.217916 kernel: printk: console [ttyAMA0] enabled Mar 13 12:20:18.217924 kernel: printk: bootconsole [pl11] disabled Mar 13 12:20:18.217933 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Mar 13 12:20:18.217941 kernel: iommu: Default domain type: Translated Mar 13 12:20:18.217948 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 13 12:20:18.217956 kernel: efivars: Registered efivars operations Mar 13 12:20:18.217963 kernel: vgaarb: loaded Mar 13 12:20:18.217970 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 13 12:20:18.217978 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 12:20:18.217985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 12:20:18.217993 kernel: pnp: PnP ACPI init Mar 13 12:20:18.218002 kernel: pnp: PnP ACPI: found 0 devices Mar 13 12:20:18.218009 kernel: NET: Registered PF_INET protocol family Mar 13 12:20:18.218017 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 12:20:18.218025 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 13 12:20:18.218032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 12:20:18.218040 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Mar 13 12:20:18.218047 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 13 12:20:18.218055 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 13 12:20:18.218062 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 12:20:18.218071 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 13 12:20:18.218079 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 12:20:18.218086 kernel: PCI: CLS 0 bytes, default 64 Mar 13 12:20:18.218094 kernel: kvm [1]: HYP mode not available Mar 13 12:20:18.218101 kernel: Initialise system trusted keyrings Mar 13 12:20:18.218109 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 13 12:20:18.218116 kernel: Key type asymmetric registered Mar 13 12:20:18.218124 kernel: Asymmetric key parser 'x509' registered Mar 13 12:20:18.218131 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 12:20:18.218140 kernel: io scheduler mq-deadline registered Mar 13 12:20:18.218148 kernel: io scheduler kyber registered Mar 13 12:20:18.218155 kernel: io scheduler bfq registered Mar 13 12:20:18.218163 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 12:20:18.218170 kernel: thunder_xcv, ver 1.0 Mar 13 12:20:18.218177 kernel: thunder_bgx, ver 1.0 Mar 13 12:20:18.218185 kernel: nicpf, ver 1.0 Mar 13 12:20:18.218192 kernel: nicvf, ver 1.0 Mar 13 12:20:18.218350 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 13 12:20:18.218426 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-13T12:20:17 UTC (1773404417) Mar 13 12:20:18.218436 kernel: efifb: probing for efifb Mar 13 12:20:18.218444 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Mar 13 12:20:18.218452 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Mar 13 12:20:18.218460 kernel: efifb: scrolling: redraw Mar 13 12:20:18.218467 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Mar 13 12:20:18.218475 kernel: Console: switching to colour frame buffer device 128x48 Mar 13 12:20:18.218482 kernel: fb0: EFI VGA frame buffer device Mar 13 12:20:18.218492 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Mar 13 12:20:18.218499 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 13 12:20:18.218507 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Mar 13 12:20:18.218515 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 13 12:20:18.218522 kernel: watchdog: Hard watchdog permanently disabled Mar 13 12:20:18.218530 kernel: NET: Registered PF_INET6 protocol family Mar 13 12:20:18.218537 kernel: Segment Routing with IPv6 Mar 13 12:20:18.218545 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 12:20:18.218552 kernel: NET: Registered PF_PACKET protocol family Mar 13 12:20:18.218561 kernel: Key type dns_resolver registered Mar 13 12:20:18.218569 kernel: registered taskstats version 1 Mar 13 12:20:18.218576 kernel: Loading compiled-in X.509 certificates Mar 13 12:20:18.218583 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.129-flatcar: 669007e8dd7e677277a9246a6f3b194a311f8cf1' Mar 13 12:20:18.218591 kernel: Key type .fscrypt registered Mar 13 12:20:18.218598 kernel: Key type fscrypt-provisioning registered Mar 13 12:20:18.218605 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 13 12:20:18.218613 kernel: ima: Allocated hash algorithm: sha1 Mar 13 12:20:18.218620 kernel: ima: No architecture policies found Mar 13 12:20:18.218629 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 13 12:20:18.218637 kernel: clk: Disabling unused clocks Mar 13 12:20:18.218644 kernel: Freeing unused kernel memory: 39424K Mar 13 12:20:18.218652 kernel: Run /init as init process Mar 13 12:20:18.218659 kernel: with arguments: Mar 13 12:20:18.218667 kernel: /init Mar 13 12:20:18.218674 kernel: with environment: Mar 13 12:20:18.218681 kernel: HOME=/ Mar 13 12:20:18.218689 kernel: TERM=linux Mar 13 12:20:18.218699 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 13 12:20:18.218711 systemd[1]: Detected virtualization microsoft. Mar 13 12:20:18.218719 systemd[1]: Detected architecture arm64. Mar 13 12:20:18.218739 systemd[1]: Running in initrd. Mar 13 12:20:18.218749 systemd[1]: No hostname configured, using default hostname. Mar 13 12:20:18.218756 systemd[1]: Hostname set to . Mar 13 12:20:18.218765 systemd[1]: Initializing machine ID from random generator. Mar 13 12:20:18.218776 systemd[1]: Queued start job for default target initrd.target. Mar 13 12:20:18.218784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 12:20:18.218792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 12:20:18.218801 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 12:20:18.218810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 13 12:20:18.218818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 12:20:18.218826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 12:20:18.218836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 12:20:18.218846 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 12:20:18.218854 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 12:20:18.218862 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 12:20:18.218870 systemd[1]: Reached target paths.target - Path Units. Mar 13 12:20:18.218878 systemd[1]: Reached target slices.target - Slice Units. Mar 13 12:20:18.218886 systemd[1]: Reached target swap.target - Swaps. Mar 13 12:20:18.218894 systemd[1]: Reached target timers.target - Timer Units. Mar 13 12:20:18.218902 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 12:20:18.218912 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 12:20:18.218920 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 12:20:18.218928 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 13 12:20:18.218936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 12:20:18.218944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 12:20:18.218952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 12:20:18.218960 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 12:20:18.218968 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 13 12:20:18.218978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 12:20:18.218986 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 12:20:18.218994 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 12:20:18.219002 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 12:20:18.219010 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 12:20:18.219039 systemd-journald[217]: Collecting audit messages is disabled. Mar 13 12:20:18.219062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 12:20:18.219071 systemd-journald[217]: Journal started Mar 13 12:20:18.219091 systemd-journald[217]: Runtime Journal (/run/log/journal/115a225d4f3e4dd9ae8fa7ae5df99dd6) is 8.0M, max 78.5M, 70.5M free. Mar 13 12:20:18.222890 systemd-modules-load[218]: Inserted module 'overlay' Mar 13 12:20:18.244595 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 12:20:18.244622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 13 12:20:18.251592 kernel: Bridge firewalling registered Mar 13 12:20:18.249195 systemd-modules-load[218]: Inserted module 'br_netfilter' Mar 13 12:20:18.256723 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 12:20:18.262938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 12:20:18.273444 systemd[1]: Finished systemd-fsck-usr.service. Mar 13 12:20:18.283243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 12:20:18.292426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:20:18.310924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 13 12:20:18.317892 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 12:20:18.337975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 12:20:18.346893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 12:20:18.354121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 12:20:18.377917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 12:20:18.390205 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 12:20:18.407042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 12:20:18.422977 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 13 12:20:18.428896 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 12:20:18.445646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 12:20:18.455324 dracut-cmdline[248]: dracut-dracut-053 Mar 13 12:20:18.467966 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949 Mar 13 12:20:18.497176 systemd-resolved[250]: Positive Trust Anchors: Mar 13 12:20:18.497193 systemd-resolved[250]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 12:20:18.497225 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 12:20:18.500131 systemd-resolved[250]: Defaulting to hostname 'linux'. Mar 13 12:20:18.501018 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 12:20:18.509749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 12:20:18.522348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 12:20:18.623748 kernel: SCSI subsystem initialized Mar 13 12:20:18.630754 kernel: Loading iSCSI transport class v2.0-870. Mar 13 12:20:18.640751 kernel: iscsi: registered transport (tcp) Mar 13 12:20:18.657080 kernel: iscsi: registered transport (qla4xxx) Mar 13 12:20:18.657110 kernel: QLogic iSCSI HBA Driver Mar 13 12:20:18.692545 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 12:20:18.704016 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 13 12:20:18.733306 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 13 12:20:18.733390 kernel: device-mapper: uevent: version 1.0.3 Mar 13 12:20:18.738510 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 13 12:20:18.786753 kernel: raid6: neonx8 gen() 15786 MB/s Mar 13 12:20:18.805760 kernel: raid6: neonx4 gen() 15691 MB/s Mar 13 12:20:18.824750 kernel: raid6: neonx2 gen() 13246 MB/s Mar 13 12:20:18.844763 kernel: raid6: neonx1 gen() 10502 MB/s Mar 13 12:20:18.863760 kernel: raid6: int64x8 gen() 6984 MB/s Mar 13 12:20:18.882754 kernel: raid6: int64x4 gen() 7372 MB/s Mar 13 12:20:18.902752 kernel: raid6: int64x2 gen() 6146 MB/s Mar 13 12:20:18.924829 kernel: raid6: int64x1 gen() 5072 MB/s Mar 13 12:20:18.924887 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s Mar 13 12:20:18.947783 kernel: raid6: .... xor() 12043 MB/s, rmw enabled Mar 13 12:20:18.947794 kernel: raid6: using neon recovery algorithm Mar 13 12:20:18.958801 kernel: xor: measuring software checksum speed Mar 13 12:20:18.958819 kernel: 8regs : 19769 MB/sec Mar 13 12:20:18.961631 kernel: 32regs : 19613 MB/sec Mar 13 12:20:18.964479 kernel: arm64_neon : 27034 MB/sec Mar 13 12:20:18.967877 kernel: xor: using function: arm64_neon (27034 MB/sec) Mar 13 12:20:19.018784 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 12:20:19.028670 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 12:20:19.041897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 12:20:19.062072 systemd-udevd[435]: Using default interface naming scheme 'v255'. Mar 13 12:20:19.066626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 12:20:19.083950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 12:20:19.100053 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Mar 13 12:20:19.129926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 13 12:20:19.141221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 12:20:19.182239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 12:20:19.201085 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 12:20:19.223565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 12:20:19.241208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 12:20:19.252805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 12:20:19.259984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 12:20:19.279005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 12:20:19.308746 kernel: hv_vmbus: Vmbus version:5.3 Mar 13 12:20:19.312316 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 12:20:19.325624 kernel: hv_vmbus: registering driver hid_hyperv Mar 13 12:20:19.340255 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 13 12:20:19.340321 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 13 12:20:19.340774 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 13 12:20:19.349555 kernel: hv_vmbus: registering driver hv_netvsc Mar 13 12:20:19.349610 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 13 12:20:19.360261 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 13 12:20:19.361277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 12:20:19.361433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 13 12:20:19.398423 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Mar 13 12:20:19.398449 kernel: hv_vmbus: registering driver hv_storvsc
Mar 13 12:20:19.384289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 12:20:19.389213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:20:19.418900 kernel: scsi host1: storvsc_host_t
Mar 13 12:20:19.423006 kernel: scsi host0: storvsc_host_t
Mar 13 12:20:19.423037 kernel: PTP clock support registered
Mar 13 12:20:19.423000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:19.445540 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 13 12:20:19.445748 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Mar 13 12:20:19.430014 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:20:19.466536 kernel: hv_utils: Registering HyperV Utility Driver
Mar 13 12:20:19.466596 kernel: hv_vmbus: registering driver hv_utils
Mar 13 12:20:19.474537 kernel: hv_utils: Heartbeat IC version 3.0
Mar 13 12:20:19.474592 kernel: hv_utils: Shutdown IC version 3.2
Mar 13 12:20:19.474613 kernel: hv_utils: TimeSync IC version 4.0
Mar 13 12:20:19.477801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:20:19.169773 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: VF slot 1 added
Mar 13 12:20:19.179937 systemd-journald[217]: Time jumped backwards, rotating.
Mar 13 12:20:19.160530 systemd-resolved[250]: Clock change detected. Flushing caches.
Mar 13 12:20:19.196395 kernel: hv_vmbus: registering driver hv_pci
Mar 13 12:20:19.196444 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 13 12:20:19.196634 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 12:20:19.194352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:19.212493 kernel: hv_pci 4676b71a-7eba-4741-bc0b-2531674968a6: PCI VMBus probing: Using version 0x10004
Mar 13 12:20:19.220641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 12:20:19.242222 kernel: hv_pci 4676b71a-7eba-4741-bc0b-2531674968a6: PCI host bridge to bus 7eba:00
Mar 13 12:20:19.242432 kernel: pci_bus 7eba:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 13 12:20:19.242560 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 13 12:20:19.242662 kernel: pci_bus 7eba:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 13 12:20:19.254269 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 13 12:20:19.258594 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 13 12:20:19.258814 kernel: pci 7eba:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 13 12:20:19.265377 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 13 12:20:19.265118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:20:19.290954 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 13 12:20:19.300865 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 13 12:20:19.300986 kernel: pci 7eba:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 13 12:20:19.301007 kernel: pci 7eba:00:02.0: enabling Extended Tags
Mar 13 12:20:19.301020 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 12:20:19.301036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 12:20:19.301140 kernel: pci 7eba:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7eba:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 13 12:20:19.313973 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 13 12:20:19.314149 kernel: pci_bus 7eba:00: busn_res: [bus 00-ff] end is updated to 00
Mar 13 12:20:19.323496 kernel: pci 7eba:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 13 12:20:19.349494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#55 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 12:20:19.378045 kernel: mlx5_core 7eba:00:02.0: enabling device (0000 -> 0002)
Mar 13 12:20:19.383496 kernel: mlx5_core 7eba:00:02.0: firmware version: 16.30.5026
Mar 13 12:20:19.583774 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: VF registering: eth1
Mar 13 12:20:19.583986 kernel: mlx5_core 7eba:00:02.0 eth1: joined to eth0
Mar 13 12:20:19.591576 kernel: mlx5_core 7eba:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 13 12:20:19.599496 kernel: mlx5_core 7eba:00:02.0 enP32442s1: renamed from eth1
Mar 13 12:20:20.321223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 13 12:20:20.386510 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484)
Mar 13 12:20:20.400381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 13 12:20:20.411439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 13 12:20:20.508512 kernel: BTRFS: device fsid beae115b-a7a4-4bd0-8b91-fe8e188f678a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (491)
Mar 13 12:20:20.522166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 13 12:20:20.527750 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 13 12:20:20.554766 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 12:20:20.576534 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 12:20:20.584506 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 12:20:21.595512 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 12:20:21.595695 disk-uuid[606]: The operation has completed successfully.
Mar 13 12:20:21.663290 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 12:20:21.663403 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 12:20:21.694653 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 12:20:21.705717 sh[719]: Success
Mar 13 12:20:21.754100 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 13 12:20:22.296581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 12:20:22.313030 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 12:20:22.320286 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 12:20:22.349047 kernel: BTRFS info (device dm-0): first mount of filesystem beae115b-a7a4-4bd0-8b91-fe8e188f678a
Mar 13 12:20:22.349096 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 13 12:20:22.354626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 13 12:20:22.358705 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 13 12:20:22.362162 kernel: BTRFS info (device dm-0): using free space tree
Mar 13 12:20:23.030238 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 12:20:23.034700 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 12:20:23.049767 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 12:20:23.061204 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 12:20:23.093769 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707
Mar 13 12:20:23.093829 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 13 12:20:23.097548 kernel: BTRFS info (device sda6): using free space tree
Mar 13 12:20:23.164669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 12:20:23.182502 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 13 12:20:23.187793 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 12:20:23.203511 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707
Mar 13 12:20:23.212814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 12:20:23.225768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 12:20:23.233214 systemd-networkd[895]: lo: Link UP
Mar 13 12:20:23.233218 systemd-networkd[895]: lo: Gained carrier
Mar 13 12:20:23.234826 systemd-networkd[895]: Enumeration completed
Mar 13 12:20:23.235223 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 12:20:23.237942 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 12:20:23.237946 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 12:20:23.245610 systemd[1]: Reached target network.target - Network.
Mar 13 12:20:23.323494 kernel: mlx5_core 7eba:00:02.0 enP32442s1: Link up
Mar 13 12:20:23.365495 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: Data path switched to VF: enP32442s1
Mar 13 12:20:23.365568 systemd-networkd[895]: enP32442s1: Link UP
Mar 13 12:20:23.365653 systemd-networkd[895]: eth0: Link UP
Mar 13 12:20:23.365752 systemd-networkd[895]: eth0: Gained carrier
Mar 13 12:20:23.365761 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 12:20:23.385087 systemd-networkd[895]: enP32442s1: Gained carrier
Mar 13 12:20:23.398537 systemd-networkd[895]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 13 12:20:25.251649 systemd-networkd[895]: eth0: Gained IPv6LL
Mar 13 12:20:25.329773 ignition[903]: Ignition 2.19.0
Mar 13 12:20:25.329791 ignition[903]: Stage: fetch-offline
Mar 13 12:20:25.339209 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 12:20:25.329834 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:25.329842 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:25.329945 ignition[903]: parsed url from cmdline: ""
Mar 13 12:20:25.329948 ignition[903]: no config URL provided
Mar 13 12:20:25.329953 ignition[903]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 12:20:25.362768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 12:20:25.329961 ignition[903]: no config at "/usr/lib/ignition/user.ign"
Mar 13 12:20:25.329967 ignition[903]: failed to fetch config: resource requires networking
Mar 13 12:20:25.330160 ignition[903]: Ignition finished successfully
Mar 13 12:20:25.385066 ignition[912]: Ignition 2.19.0
Mar 13 12:20:25.385073 ignition[912]: Stage: fetch
Mar 13 12:20:25.385276 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:25.385286 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:25.385388 ignition[912]: parsed url from cmdline: ""
Mar 13 12:20:25.385391 ignition[912]: no config URL provided
Mar 13 12:20:25.385396 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 12:20:25.385403 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Mar 13 12:20:25.385432 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 13 12:20:25.483741 ignition[912]: GET result: OK
Mar 13 12:20:25.483832 ignition[912]: config has been read from IMDS userdata
Mar 13 12:20:25.483880 ignition[912]: parsing config with SHA512: c9b98b9fbff168c529df3b2300f197b8fa462516d6b068aab107929d55634e34b5fe86b894d3b3700211049ad8cdf72c7cd03ef90b9748d71800e00d2487aa53
Mar 13 12:20:25.487729 unknown[912]: fetched base config from "system"
Mar 13 12:20:25.488122 ignition[912]: fetch: fetch complete
Mar 13 12:20:25.487737 unknown[912]: fetched base config from "system"
Mar 13 12:20:25.488127 ignition[912]: fetch: fetch passed
Mar 13 12:20:25.487743 unknown[912]: fetched user config from "azure"
Mar 13 12:20:25.488180 ignition[912]: Ignition finished successfully
Mar 13 12:20:25.496793 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 12:20:25.512658 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 12:20:25.532086 ignition[918]: Ignition 2.19.0
Mar 13 12:20:25.532093 ignition[918]: Stage: kargs
Mar 13 12:20:25.538094 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 12:20:25.532306 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:25.532315 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:25.533688 ignition[918]: kargs: kargs passed
Mar 13 12:20:25.533748 ignition[918]: Ignition finished successfully
Mar 13 12:20:25.558664 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 12:20:25.573936 ignition[924]: Ignition 2.19.0
Mar 13 12:20:25.573945 ignition[924]: Stage: disks
Mar 13 12:20:25.577744 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 12:20:25.574118 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:25.584788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 12:20:25.574128 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:25.593206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 12:20:25.575142 ignition[924]: disks: disks passed
Mar 13 12:20:25.602390 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 12:20:25.575195 ignition[924]: Ignition finished successfully
Mar 13 12:20:25.610979 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 12:20:25.620113 systemd[1]: Reached target basic.target - Basic System.
Mar 13 12:20:25.639741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 12:20:25.777467 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 13 12:20:25.784511 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 12:20:25.796744 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 12:20:25.852664 kernel: EXT4-fs (sda9): mounted filesystem e3689c4f-fa92-4cc2-b7ea-ac589877c8df r/w with ordered data mode. Quota mode: none.
Mar 13 12:20:25.853198 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 12:20:25.857295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 12:20:25.936578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 12:20:25.962358 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Mar 13 12:20:25.962410 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707
Mar 13 12:20:25.967114 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 13 12:20:25.970492 kernel: BTRFS info (device sda6): using free space tree
Mar 13 12:20:25.977502 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 13 12:20:26.003598 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 12:20:26.012287 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 13 12:20:26.022761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 12:20:26.022804 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 12:20:26.038940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 12:20:26.046600 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 12:20:26.066667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 12:20:27.066311 coreos-metadata[960]: Mar 13 12:20:27.066 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 13 12:20:27.074931 coreos-metadata[960]: Mar 13 12:20:27.074 INFO Fetch successful
Mar 13 12:20:27.074931 coreos-metadata[960]: Mar 13 12:20:27.074 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 13 12:20:27.089415 coreos-metadata[960]: Mar 13 12:20:27.089 INFO Fetch successful
Mar 13 12:20:27.124424 coreos-metadata[960]: Mar 13 12:20:27.123 INFO wrote hostname ci-4081.3.101-d13a81acd8 to /sysroot/etc/hostname
Mar 13 12:20:27.131095 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 13 12:20:27.550965 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 12:20:27.635517 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Mar 13 12:20:27.644068 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 12:20:27.652136 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 12:20:29.646384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 12:20:29.658691 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 12:20:29.670669 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 12:20:29.685247 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 12:20:29.689441 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707
Mar 13 12:20:29.715315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 12:20:29.727956 ignition[1063]: INFO : Ignition 2.19.0
Mar 13 12:20:29.727956 ignition[1063]: INFO : Stage: mount
Mar 13 12:20:29.727956 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:29.727956 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:29.749786 ignition[1063]: INFO : mount: mount passed
Mar 13 12:20:29.749786 ignition[1063]: INFO : Ignition finished successfully
Mar 13 12:20:29.734449 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 12:20:29.753677 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 12:20:29.770723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 12:20:29.793490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077)
Mar 13 12:20:29.799505 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707
Mar 13 12:20:29.799552 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 13 12:20:29.807204 kernel: BTRFS info (device sda6): using free space tree
Mar 13 12:20:29.813486 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 13 12:20:29.815786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 12:20:29.839389 ignition[1094]: INFO : Ignition 2.19.0
Mar 13 12:20:29.839389 ignition[1094]: INFO : Stage: files
Mar 13 12:20:29.845975 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:29.845975 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:29.845975 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 12:20:29.873069 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 12:20:29.873069 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 12:20:30.059389 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 12:20:30.065436 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 12:20:30.065436 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 12:20:30.059816 unknown[1094]: wrote ssh authorized keys file for user: core
Mar 13 12:20:30.095830 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 13 12:20:30.105244 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 13 12:20:30.132950 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 12:20:30.249968 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 13 12:20:30.804975 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 12:20:31.796841 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 13 12:20:31.796841 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 12:20:31.906509 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 12:20:31.913852 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 12:20:31.913852 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 12:20:31.913852 ignition[1094]: INFO : files: files passed
Mar 13 12:20:31.913852 ignition[1094]: INFO : Ignition finished successfully
Mar 13 12:20:31.914311 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 12:20:31.941816 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 12:20:31.949691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 12:20:31.962791 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 12:20:31.962882 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 12:20:32.004562 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 12:20:32.004562 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 12:20:32.019199 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 12:20:32.020331 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 12:20:32.031787 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 12:20:32.052770 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 12:20:32.082431 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 12:20:32.082551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 12:20:32.092522 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 12:20:32.102351 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 12:20:32.111366 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 12:20:32.123755 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 12:20:32.142178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 12:20:32.157755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 12:20:32.173199 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 12:20:32.178724 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 12:20:32.189283 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 12:20:32.198603 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 12:20:32.198724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 12:20:32.211466 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 12:20:32.215899 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 12:20:32.224984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 12:20:32.234345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 12:20:32.243388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 12:20:32.253220 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 12:20:32.262960 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 12:20:32.273506 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 12:20:32.282160 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 12:20:32.291516 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 12:20:32.299965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 12:20:32.300085 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 12:20:32.313514 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:20:32.318553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:20:32.327693 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 12:20:32.331533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:20:32.336761 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 12:20:32.336871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 12:20:32.349949 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 12:20:32.350064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 12:20:32.355419 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 12:20:32.355516 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 12:20:32.415870 ignition[1146]: INFO : Ignition 2.19.0
Mar 13 12:20:32.415870 ignition[1146]: INFO : Stage: umount
Mar 13 12:20:32.415870 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:32.415870 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:32.415870 ignition[1146]: INFO : umount: umount passed
Mar 13 12:20:32.415870 ignition[1146]: INFO : Ignition finished successfully
Mar 13 12:20:32.363437 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 13 12:20:32.363540 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 13 12:20:32.387772 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 12:20:32.399100 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 12:20:32.403216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:20:32.421856 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 12:20:32.428524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 12:20:32.428757 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 12:20:32.433953 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 12:20:32.434105 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 12:20:32.449631 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 12:20:32.449740 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 12:20:32.455233 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 12:20:32.458062 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 12:20:32.458165 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 12:20:32.464260 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 12:20:32.464310 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 12:20:32.471888 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 12:20:32.471933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 12:20:32.479516 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 12:20:32.479556 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 12:20:32.488015 systemd[1]: Stopped target network.target - Network.
Mar 13 12:20:32.497329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 12:20:32.497380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 12:20:32.510301 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 12:20:32.514010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 12:20:32.523499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:20:32.531694 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 12:20:32.539833 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 12:20:32.548816 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 12:20:32.548869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 12:20:32.557238 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 12:20:32.557279 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 12:20:32.565379 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 12:20:32.565443 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 12:20:32.573338 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 12:20:32.573380 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 12:20:32.581882 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 12:20:32.589854 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 12:20:32.599522 systemd-networkd[895]: eth0: DHCPv6 lease lost
Mar 13 12:20:32.600061 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 12:20:32.600162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 12:20:32.801643 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: Data path switched from VF: enP32442s1
Mar 13 12:20:32.609658 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 12:20:32.609767 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 12:20:32.620660 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 12:20:32.620873 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 12:20:32.631239 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 12:20:32.631472 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:20:32.638238 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 12:20:32.638298 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 12:20:32.658689 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 12:20:32.667911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 12:20:32.667990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 12:20:32.677437 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 12:20:32.677483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:20:32.686913 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 12:20:32.686952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:20:32.698179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 12:20:32.698224 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 12:20:32.707223 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 12:20:32.741819 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 12:20:32.741985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 12:20:32.751660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 12:20:32.751701 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:20:32.760871 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 12:20:32.760905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:20:32.770368 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 12:20:32.770410 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 12:20:32.782193 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 12:20:32.782233 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 12:20:32.797440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 12:20:32.797510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:20:32.825259 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 12:20:32.832547 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 12:20:32.832635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:20:32.845821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:20:32.845878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:32.856968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 12:20:32.857084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 12:20:32.866731 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 12:20:32.866823 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 12:20:32.877038 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 12:20:33.052085 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 13 12:20:32.898650 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 12:20:32.912043 systemd[1]: Switching root.
Mar 13 12:20:33.061305 systemd-journald[217]: Journal stopped
Mar 13 12:20:18.212542 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 13 12:20:18.212566 kernel: Linux version 6.6.129-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 08:56:28 -00 2026
Mar 13 12:20:18.212575 kernel: KASLR enabled
Mar 13 12:20:18.212581 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 13 12:20:18.212588 kernel: printk: bootconsole [pl11] enabled
Mar 13 12:20:18.212594 kernel: efi: EFI v2.7 by EDK II
Mar 13 12:20:18.212601 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Mar 13 12:20:18.212607 kernel: random: crng init done
Mar 13 12:20:18.212614 kernel: ACPI: Early table checksum verification disabled
Mar 13 12:20:18.212619 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 13 12:20:18.212626 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212632 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212639 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 13 12:20:18.212646 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212653 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212659 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212666 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212674 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212680 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212687 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 13 12:20:18.212693 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 13 12:20:18.212700 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 13 12:20:18.212706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 13 12:20:18.212712 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 13 12:20:18.212719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 13 12:20:18.212725 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 13 12:20:18.214757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 13 12:20:18.214765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 13 12:20:18.214778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 13 12:20:18.214784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 13 12:20:18.214791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 13 12:20:18.214798 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 13 12:20:18.214805 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 13 12:20:18.214811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 13 12:20:18.214818 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Mar 13 12:20:18.214824 kernel: Zone ranges:
Mar 13 12:20:18.214830 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 13 12:20:18.214838 kernel: DMA32 empty
Mar 13 12:20:18.214844 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 13 12:20:18.214851 kernel: Movable zone start for each node
Mar 13 12:20:18.214862 kernel: Early memory node ranges
Mar 13 12:20:18.214869 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 13 12:20:18.214877 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 13 12:20:18.214885 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 13 12:20:18.214892 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 13 12:20:18.214901 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 13 12:20:18.214908 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 13 12:20:18.214915 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 13 12:20:18.214923 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 13 12:20:18.214930 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 13 12:20:18.214937 kernel: psci: probing for conduit method from ACPI.
Mar 13 12:20:18.214944 kernel: psci: PSCIv1.1 detected in firmware.
Mar 13 12:20:18.214950 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 13 12:20:18.214958 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 13 12:20:18.214965 kernel: psci: SMC Calling Convention v1.4
Mar 13 12:20:18.214972 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 13 12:20:18.214979 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 13 12:20:18.214988 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 13 12:20:18.214994 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 13 12:20:18.215002 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 13 12:20:18.215009 kernel: Detected PIPT I-cache on CPU0
Mar 13 12:20:18.215016 kernel: CPU features: detected: GIC system register CPU interface
Mar 13 12:20:18.215023 kernel: CPU features: detected: Hardware dirty bit management
Mar 13 12:20:18.215030 kernel: CPU features: detected: Spectre-BHB
Mar 13 12:20:18.215038 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 13 12:20:18.215044 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 13 12:20:18.215051 kernel: CPU features: detected: ARM erratum 1418040
Mar 13 12:20:18.215058 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 13 12:20:18.215066 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 13 12:20:18.215073 kernel: alternatives: applying boot alternatives
Mar 13 12:20:18.215083 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949
Mar 13 12:20:18.215090 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 12:20:18.215097 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 12:20:18.215104 kernel: Fallback order for Node 0: 0
Mar 13 12:20:18.215111 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 13 12:20:18.215117 kernel: Policy zone: Normal
Mar 13 12:20:18.215125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 12:20:18.215132 kernel: software IO TLB: area num 2.
Mar 13 12:20:18.215139 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Mar 13 12:20:18.215148 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8120K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Mar 13 12:20:18.215155 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 13 12:20:18.215162 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 12:20:18.215170 kernel: rcu: RCU event tracing is enabled.
Mar 13 12:20:18.215178 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 13 12:20:18.215184 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 12:20:18.215191 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 12:20:18.215198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 12:20:18.215205 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 13 12:20:18.215213 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 13 12:20:18.215219 kernel: GICv3: 960 SPIs implemented
Mar 13 12:20:18.215228 kernel: GICv3: 0 Extended SPIs implemented
Mar 13 12:20:18.215236 kernel: Root IRQ handler: gic_handle_irq
Mar 13 12:20:18.215242 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Mar 13 12:20:18.215249 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 13 12:20:18.215256 kernel: ITS: No ITS available, not enabling LPIs
Mar 13 12:20:18.215263 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 12:20:18.215270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 13 12:20:18.215277 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 13 12:20:18.215284 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 13 12:20:18.215291 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 13 12:20:18.215298 kernel: Console: colour dummy device 80x25
Mar 13 12:20:18.215307 kernel: printk: console [tty1] enabled
Mar 13 12:20:18.215315 kernel: ACPI: Core revision 20230628
Mar 13 12:20:18.215322 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 13 12:20:18.215329 kernel: pid_max: default: 32768 minimum: 301
Mar 13 12:20:18.215336 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 13 12:20:18.215343 kernel: landlock: Up and running.
Mar 13 12:20:18.215350 kernel: SELinux: Initializing.
Mar 13 12:20:18.215357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 12:20:18.215364 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 12:20:18.215373 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 12:20:18.215380 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 13 12:20:18.215388 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Mar 13 12:20:18.215395 kernel: Hyper-V: Host Build 10.0.26100.1480-1-0
Mar 13 12:20:18.215402 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 13 12:20:18.215409 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 12:20:18.215416 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 12:20:18.215424 kernel: Remapping and enabling EFI services.
Mar 13 12:20:18.215437 kernel: smp: Bringing up secondary CPUs ...
Mar 13 12:20:18.215444 kernel: Detected PIPT I-cache on CPU1
Mar 13 12:20:18.215452 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 13 12:20:18.215459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 13 12:20:18.215468 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 13 12:20:18.215476 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 12:20:18.215483 kernel: SMP: Total of 2 processors activated.
Mar 13 12:20:18.215491 kernel: CPU features: detected: 32-bit EL0 Support
Mar 13 12:20:18.215498 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 13 12:20:18.215507 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 13 12:20:18.215515 kernel: CPU features: detected: CRC32 instructions
Mar 13 12:20:18.215522 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 13 12:20:18.215530 kernel: CPU features: detected: LSE atomic instructions
Mar 13 12:20:18.215537 kernel: CPU features: detected: Privileged Access Never
Mar 13 12:20:18.215545 kernel: CPU: All CPU(s) started at EL1
Mar 13 12:20:18.215552 kernel: alternatives: applying system-wide alternatives
Mar 13 12:20:18.215559 kernel: devtmpfs: initialized
Mar 13 12:20:18.215567 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 12:20:18.215576 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 13 12:20:18.215583 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 12:20:18.215591 kernel: SMBIOS 3.1.0 present.
Mar 13 12:20:18.215599 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 13 12:20:18.215606 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 12:20:18.215613 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 13 12:20:18.215621 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 13 12:20:18.215628 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 13 12:20:18.215636 kernel: audit: initializing netlink subsys (disabled)
Mar 13 12:20:18.215645 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 13 12:20:18.215652 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 12:20:18.215660 kernel: cpuidle: using governor menu
Mar 13 12:20:18.215667 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 13 12:20:18.215675 kernel: ASID allocator initialised with 32768 entries
Mar 13 12:20:18.215682 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 12:20:18.215689 kernel: Serial: AMBA PL011 UART driver
Mar 13 12:20:18.215697 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 13 12:20:18.215704 kernel: Modules: 0 pages in range for non-PLT usage
Mar 13 12:20:18.215713 kernel: Modules: 509008 pages in range for PLT usage
Mar 13 12:20:18.215721 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 12:20:18.217796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 12:20:18.217812 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 13 12:20:18.217820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 13 12:20:18.217827 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 12:20:18.217835 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 12:20:18.217843 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 13 12:20:18.217851 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 13 12:20:18.217864 kernel: ACPI: Added _OSI(Module Device)
Mar 13 12:20:18.217871 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 12:20:18.217879 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 12:20:18.217886 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 12:20:18.217894 kernel: ACPI: Interpreter enabled
Mar 13 12:20:18.217901 kernel: ACPI: Using GIC for interrupt routing
Mar 13 12:20:18.217909 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 13 12:20:18.217916 kernel: printk: console [ttyAMA0] enabled
Mar 13 12:20:18.217924 kernel: printk: bootconsole [pl11] disabled
Mar 13 12:20:18.217933 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 13 12:20:18.217941 kernel: iommu: Default domain type: Translated
Mar 13 12:20:18.217948 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 13 12:20:18.217956 kernel: efivars: Registered efivars operations
Mar 13 12:20:18.217963 kernel: vgaarb: loaded
Mar 13 12:20:18.217970 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 13 12:20:18.217978 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 12:20:18.217985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 12:20:18.217993 kernel: pnp: PnP ACPI init
Mar 13 12:20:18.218002 kernel: pnp: PnP ACPI: found 0 devices
Mar 13 12:20:18.218009 kernel: NET: Registered PF_INET protocol family
Mar 13 12:20:18.218017 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 12:20:18.218025 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 12:20:18.218032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 12:20:18.218040 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 12:20:18.218047 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 12:20:18.218055 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 12:20:18.218062 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 12:20:18.218071 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 12:20:18.218079 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 12:20:18.218086 kernel: PCI: CLS 0 bytes, default 64
Mar 13 12:20:18.218094 kernel: kvm [1]: HYP mode not available
Mar 13 12:20:18.218101 kernel: Initialise system trusted keyrings
Mar 13 12:20:18.218109 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 12:20:18.218116 kernel: Key type asymmetric registered
Mar 13 12:20:18.218124 kernel: Asymmetric key parser 'x509' registered
Mar 13 12:20:18.218131 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 12:20:18.218140 kernel: io scheduler mq-deadline registered
Mar 13 12:20:18.218148 kernel: io scheduler kyber registered
Mar 13 12:20:18.218155 kernel: io scheduler bfq registered
Mar 13 12:20:18.218163 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 12:20:18.218170 kernel: thunder_xcv, ver 1.0
Mar 13 12:20:18.218177 kernel: thunder_bgx, ver 1.0
Mar 13 12:20:18.218185 kernel: nicpf, ver 1.0
Mar 13 12:20:18.218192 kernel: nicvf, ver 1.0
Mar 13 12:20:18.218350 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 13 12:20:18.218426 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-13T12:20:17 UTC (1773404417)
Mar 13 12:20:18.218436 kernel: efifb: probing for efifb
Mar 13 12:20:18.218444 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 13 12:20:18.218452 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 13 12:20:18.218460 kernel: efifb: scrolling: redraw
Mar 13 12:20:18.218467 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 13 12:20:18.218475 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 12:20:18.218482 kernel: fb0: EFI VGA frame buffer device
Mar 13 12:20:18.218492 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 13 12:20:18.218499 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 13 12:20:18.218507 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Mar 13 12:20:18.218515 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 13 12:20:18.218522 kernel: watchdog: Hard watchdog permanently disabled
Mar 13 12:20:18.218530 kernel: NET: Registered PF_INET6 protocol family
Mar 13 12:20:18.218537 kernel: Segment Routing with IPv6
Mar 13 12:20:18.218545 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 12:20:18.218552 kernel: NET: Registered PF_PACKET protocol family
Mar 13 12:20:18.218561 kernel: Key type dns_resolver registered
Mar 13 12:20:18.218569 kernel: registered taskstats version 1
Mar 13 12:20:18.218576 kernel: Loading compiled-in X.509 certificates
Mar 13 12:20:18.218583 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.129-flatcar: 669007e8dd7e677277a9246a6f3b194a311f8cf1'
Mar 13 12:20:18.218591 kernel: Key type .fscrypt registered
Mar 13 12:20:18.218598 kernel: Key type fscrypt-provisioning registered
Mar 13 12:20:18.218605 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 12:20:18.218613 kernel: ima: Allocated hash algorithm: sha1
Mar 13 12:20:18.218620 kernel: ima: No architecture policies found
Mar 13 12:20:18.218629 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 13 12:20:18.218637 kernel: clk: Disabling unused clocks
Mar 13 12:20:18.218644 kernel: Freeing unused kernel memory: 39424K
Mar 13 12:20:18.218652 kernel: Run /init as init process
Mar 13 12:20:18.218659 kernel: with arguments:
Mar 13 12:20:18.218667 kernel: /init
Mar 13 12:20:18.218674 kernel: with environment:
Mar 13 12:20:18.218681 kernel: HOME=/
Mar 13 12:20:18.218689 kernel: TERM=linux
Mar 13 12:20:18.218699 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 13 12:20:18.218711 systemd[1]: Detected virtualization microsoft.
Mar 13 12:20:18.218719 systemd[1]: Detected architecture arm64.
Mar 13 12:20:18.218739 systemd[1]: Running in initrd.
Mar 13 12:20:18.218749 systemd[1]: No hostname configured, using default hostname.
Mar 13 12:20:18.218756 systemd[1]: Hostname set to .
Mar 13 12:20:18.218765 systemd[1]: Initializing machine ID from random generator.
Mar 13 12:20:18.218776 systemd[1]: Queued start job for default target initrd.target.
Mar 13 12:20:18.218784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:20:18.218792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:20:18.218801 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 12:20:18.218810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 12:20:18.218818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 12:20:18.218826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 12:20:18.218836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 12:20:18.218846 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 12:20:18.218854 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:20:18.218862 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:20:18.218870 systemd[1]: Reached target paths.target - Path Units.
Mar 13 12:20:18.218878 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 12:20:18.218886 systemd[1]: Reached target swap.target - Swaps.
Mar 13 12:20:18.218894 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 12:20:18.218902 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 12:20:18.218912 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 12:20:18.218920 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 12:20:18.218928 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 13 12:20:18.218936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:20:18.218944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:20:18.218952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:20:18.218960 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 12:20:18.218968 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 12:20:18.218978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 12:20:18.218986 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 12:20:18.218994 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 12:20:18.219002 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 12:20:18.219010 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 12:20:18.219039 systemd-journald[217]: Collecting audit messages is disabled.
Mar 13 12:20:18.219062 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:20:18.219071 systemd-journald[217]: Journal started
Mar 13 12:20:18.219091 systemd-journald[217]: Runtime Journal (/run/log/journal/115a225d4f3e4dd9ae8fa7ae5df99dd6) is 8.0M, max 78.5M, 70.5M free.
Mar 13 12:20:18.222890 systemd-modules-load[218]: Inserted module 'overlay'
Mar 13 12:20:18.244595 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 12:20:18.244622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 12:20:18.251592 kernel: Bridge firewalling registered
Mar 13 12:20:18.249195 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 13 12:20:18.256723 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 12:20:18.262938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:20:18.273444 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 12:20:18.283243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:20:18.292426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:18.310924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 12:20:18.317892 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 12:20:18.337975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 12:20:18.346893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 12:20:18.354121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 12:20:18.377917 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:20:18.390205 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:20:18.407042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 12:20:18.422977 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 12:20:18.428896 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 12:20:18.445646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 12:20:18.455324 dracut-cmdline[248]: dracut-dracut-053
Mar 13 12:20:18.467966 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=74d23b4a193d6e906ee495726e120d1ddfae1935a31eb217879b4eafa2053949
Mar 13 12:20:18.497176 systemd-resolved[250]: Positive Trust Anchors:
Mar 13 12:20:18.497193 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 12:20:18.497225 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 12:20:18.500131 systemd-resolved[250]: Defaulting to hostname 'linux'.
Mar 13 12:20:18.501018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 12:20:18.509749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:20:18.522348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 12:20:18.623748 kernel: SCSI subsystem initialized
Mar 13 12:20:18.630754 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 12:20:18.640751 kernel: iscsi: registered transport (tcp)
Mar 13 12:20:18.657080 kernel: iscsi: registered transport (qla4xxx)
Mar 13 12:20:18.657110 kernel: QLogic iSCSI HBA Driver
Mar 13 12:20:18.692545 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 12:20:18.704016 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 12:20:18.733306 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 12:20:18.733390 kernel: device-mapper: uevent: version 1.0.3 Mar 13 12:20:18.738510 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 13 12:20:18.786753 kernel: raid6: neonx8 gen() 15786 MB/s Mar 13 12:20:18.805760 kernel: raid6: neonx4 gen() 15691 MB/s Mar 13 12:20:18.824750 kernel: raid6: neonx2 gen() 13246 MB/s Mar 13 12:20:18.844763 kernel: raid6: neonx1 gen() 10502 MB/s Mar 13 12:20:18.863760 kernel: raid6: int64x8 gen() 6984 MB/s Mar 13 12:20:18.882754 kernel: raid6: int64x4 gen() 7372 MB/s Mar 13 12:20:18.902752 kernel: raid6: int64x2 gen() 6146 MB/s Mar 13 12:20:18.924829 kernel: raid6: int64x1 gen() 5072 MB/s Mar 13 12:20:18.924887 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s Mar 13 12:20:18.947783 kernel: raid6: .... xor() 12043 MB/s, rmw enabled Mar 13 12:20:18.947794 kernel: raid6: using neon recovery algorithm Mar 13 12:20:18.958801 kernel: xor: measuring software checksum speed Mar 13 12:20:18.958819 kernel: 8regs : 19769 MB/sec Mar 13 12:20:18.961631 kernel: 32regs : 19613 MB/sec Mar 13 12:20:18.964479 kernel: arm64_neon : 27034 MB/sec Mar 13 12:20:18.967877 kernel: xor: using function: arm64_neon (27034 MB/sec) Mar 13 12:20:19.018784 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 12:20:19.028670 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 12:20:19.041897 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 12:20:19.062072 systemd-udevd[435]: Using default interface naming scheme 'v255'. Mar 13 12:20:19.066626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 12:20:19.083950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 13 12:20:19.100053 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Mar 13 12:20:19.129926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 13 12:20:19.141221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 12:20:19.182239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 12:20:19.201085 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 12:20:19.223565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 12:20:19.241208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 12:20:19.252805 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 12:20:19.259984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 12:20:19.279005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 12:20:19.308746 kernel: hv_vmbus: Vmbus version:5.3 Mar 13 12:20:19.312316 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 13 12:20:19.325624 kernel: hv_vmbus: registering driver hid_hyperv Mar 13 12:20:19.340255 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 13 12:20:19.340321 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 13 12:20:19.340774 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Mar 13 12:20:19.349555 kernel: hv_vmbus: registering driver hv_netvsc Mar 13 12:20:19.349610 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Mar 13 12:20:19.360261 kernel: hv_vmbus: registering driver hyperv_keyboard Mar 13 12:20:19.361277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 12:20:19.361433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 13 12:20:19.398423 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Mar 13 12:20:19.398449 kernel: hv_vmbus: registering driver hv_storvsc Mar 13 12:20:19.384289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 12:20:19.389213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 12:20:19.418900 kernel: scsi host1: storvsc_host_t Mar 13 12:20:19.423006 kernel: scsi host0: storvsc_host_t Mar 13 12:20:19.423037 kernel: PTP clock support registered Mar 13 12:20:19.423000 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:20:19.445540 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Mar 13 12:20:19.445748 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Mar 13 12:20:19.430014 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 12:20:19.466536 kernel: hv_utils: Registering HyperV Utility Driver Mar 13 12:20:19.466596 kernel: hv_vmbus: registering driver hv_utils Mar 13 12:20:19.474537 kernel: hv_utils: Heartbeat IC version 3.0 Mar 13 12:20:19.474592 kernel: hv_utils: Shutdown IC version 3.2 Mar 13 12:20:19.474613 kernel: hv_utils: TimeSync IC version 4.0 Mar 13 12:20:19.477801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 12:20:19.169773 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: VF slot 1 added Mar 13 12:20:19.179937 systemd-journald[217]: Time jumped backwards, rotating. Mar 13 12:20:19.160530 systemd-resolved[250]: Clock change detected. Flushing caches. 
Mar 13 12:20:19.196395 kernel: hv_vmbus: registering driver hv_pci Mar 13 12:20:19.196444 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Mar 13 12:20:19.196634 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 13 12:20:19.194352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 12:20:19.212493 kernel: hv_pci 4676b71a-7eba-4741-bc0b-2531674968a6: PCI VMBus probing: Using version 0x10004 Mar 13 12:20:19.220641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 12:20:19.242222 kernel: hv_pci 4676b71a-7eba-4741-bc0b-2531674968a6: PCI host bridge to bus 7eba:00 Mar 13 12:20:19.242432 kernel: pci_bus 7eba:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Mar 13 12:20:19.242560 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Mar 13 12:20:19.242662 kernel: pci_bus 7eba:00: No busn resource found for root bus, will use [bus 00-ff] Mar 13 12:20:19.254269 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Mar 13 12:20:19.258594 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Mar 13 12:20:19.258814 kernel: pci 7eba:00:02.0: [15b3:1018] type 00 class 0x020000 Mar 13 12:20:19.265377 kernel: sd 0:0:0:0: [sda] Write Protect is off Mar 13 12:20:19.265118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 13 12:20:19.290954 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Mar 13 12:20:19.300865 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Mar 13 12:20:19.300986 kernel: pci 7eba:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 13 12:20:19.301007 kernel: pci 7eba:00:02.0: enabling Extended Tags Mar 13 12:20:19.301020 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:20:19.301036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#32 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 12:20:19.301140 kernel: pci 7eba:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7eba:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Mar 13 12:20:19.313973 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Mar 13 12:20:19.314149 kernel: pci_bus 7eba:00: busn_res: [bus 00-ff] end is updated to 00 Mar 13 12:20:19.323496 kernel: pci 7eba:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Mar 13 12:20:19.349494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#55 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Mar 13 12:20:19.378045 kernel: mlx5_core 7eba:00:02.0: enabling device (0000 -> 0002) Mar 13 12:20:19.383496 kernel: mlx5_core 7eba:00:02.0: firmware version: 16.30.5026 Mar 13 12:20:19.583774 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: VF registering: eth1 Mar 13 12:20:19.583986 kernel: mlx5_core 7eba:00:02.0 eth1: joined to eth0 Mar 13 12:20:19.591576 kernel: mlx5_core 7eba:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Mar 13 12:20:19.599496 kernel: mlx5_core 7eba:00:02.0 enP32442s1: renamed from eth1 Mar 13 12:20:20.321223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Mar 13 12:20:20.386510 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (484) Mar 13 12:20:20.400381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Mar 13 12:20:20.411439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Mar 13 12:20:20.508512 kernel: BTRFS: device fsid beae115b-a7a4-4bd0-8b91-fe8e188f678a devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (491) Mar 13 12:20:20.522166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Mar 13 12:20:20.527750 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Mar 13 12:20:20.554766 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 12:20:20.576534 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:20:20.584506 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:20:21.595512 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 13 12:20:21.595695 disk-uuid[606]: The operation has completed successfully. Mar 13 12:20:21.663290 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 12:20:21.663403 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 12:20:21.694653 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 12:20:21.705717 sh[719]: Success Mar 13 12:20:21.754100 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 13 12:20:22.296581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 12:20:22.313030 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 12:20:22.320286 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 13 12:20:22.349047 kernel: BTRFS info (device dm-0): first mount of filesystem beae115b-a7a4-4bd0-8b91-fe8e188f678a Mar 13 12:20:22.349096 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:20:22.354626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 13 12:20:22.358705 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 13 12:20:22.362162 kernel: BTRFS info (device dm-0): using free space tree Mar 13 12:20:23.030238 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 12:20:23.034700 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 12:20:23.049767 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 12:20:23.061204 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 13 12:20:23.093769 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:20:23.093829 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:20:23.097548 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:20:23.164669 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 12:20:23.182502 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:20:23.187793 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 12:20:23.203511 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:20:23.212814 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 12:20:23.225768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 13 12:20:23.233214 systemd-networkd[895]: lo: Link UP Mar 13 12:20:23.233218 systemd-networkd[895]: lo: Gained carrier Mar 13 12:20:23.234826 systemd-networkd[895]: Enumeration completed Mar 13 12:20:23.235223 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 12:20:23.237942 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:20:23.237946 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 12:20:23.245610 systemd[1]: Reached target network.target - Network. Mar 13 12:20:23.323494 kernel: mlx5_core 7eba:00:02.0 enP32442s1: Link up Mar 13 12:20:23.365495 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: Data path switched to VF: enP32442s1 Mar 13 12:20:23.365568 systemd-networkd[895]: enP32442s1: Link UP Mar 13 12:20:23.365653 systemd-networkd[895]: eth0: Link UP Mar 13 12:20:23.365752 systemd-networkd[895]: eth0: Gained carrier Mar 13 12:20:23.365761 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:20:23.385087 systemd-networkd[895]: enP32442s1: Gained carrier Mar 13 12:20:23.398537 systemd-networkd[895]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 12:20:25.251649 systemd-networkd[895]: eth0: Gained IPv6LL Mar 13 12:20:25.329773 ignition[903]: Ignition 2.19.0 Mar 13 12:20:25.329791 ignition[903]: Stage: fetch-offline Mar 13 12:20:25.339209 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 13 12:20:25.329834 ignition[903]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:20:25.329842 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:20:25.329945 ignition[903]: parsed url from cmdline: "" Mar 13 12:20:25.329948 ignition[903]: no config URL provided Mar 13 12:20:25.329953 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 12:20:25.362768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 13 12:20:25.329961 ignition[903]: no config at "/usr/lib/ignition/user.ign" Mar 13 12:20:25.329967 ignition[903]: failed to fetch config: resource requires networking Mar 13 12:20:25.330160 ignition[903]: Ignition finished successfully Mar 13 12:20:25.385066 ignition[912]: Ignition 2.19.0 Mar 13 12:20:25.385073 ignition[912]: Stage: fetch Mar 13 12:20:25.385276 ignition[912]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:20:25.385286 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:20:25.385388 ignition[912]: parsed url from cmdline: "" Mar 13 12:20:25.385391 ignition[912]: no config URL provided Mar 13 12:20:25.385396 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Mar 13 12:20:25.385403 ignition[912]: no config at "/usr/lib/ignition/user.ign" Mar 13 12:20:25.385432 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Mar 13 12:20:25.483741 ignition[912]: GET result: OK Mar 13 12:20:25.483832 ignition[912]: config has been read from IMDS userdata Mar 13 12:20:25.483880 ignition[912]: parsing config with SHA512: c9b98b9fbff168c529df3b2300f197b8fa462516d6b068aab107929d55634e34b5fe86b894d3b3700211049ad8cdf72c7cd03ef90b9748d71800e00d2487aa53 Mar 13 12:20:25.487729 unknown[912]: fetched base config from "system" Mar 13 12:20:25.488122 ignition[912]: fetch: fetch complete Mar 13 12:20:25.487737 unknown[912]: fetched base config from "system" Mar 13 
12:20:25.488127 ignition[912]: fetch: fetch passed Mar 13 12:20:25.487743 unknown[912]: fetched user config from "azure" Mar 13 12:20:25.488180 ignition[912]: Ignition finished successfully Mar 13 12:20:25.496793 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 13 12:20:25.512658 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 13 12:20:25.532086 ignition[918]: Ignition 2.19.0 Mar 13 12:20:25.532093 ignition[918]: Stage: kargs Mar 13 12:20:25.538094 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 13 12:20:25.532306 ignition[918]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:20:25.532315 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:20:25.533688 ignition[918]: kargs: kargs passed Mar 13 12:20:25.533748 ignition[918]: Ignition finished successfully Mar 13 12:20:25.558664 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 13 12:20:25.573936 ignition[924]: Ignition 2.19.0 Mar 13 12:20:25.573945 ignition[924]: Stage: disks Mar 13 12:20:25.577744 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 13 12:20:25.574118 ignition[924]: no configs at "/usr/lib/ignition/base.d" Mar 13 12:20:25.584788 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 13 12:20:25.574128 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:20:25.593206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 13 12:20:25.575142 ignition[924]: disks: disks passed Mar 13 12:20:25.602390 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 12:20:25.575195 ignition[924]: Ignition finished successfully Mar 13 12:20:25.610979 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 12:20:25.620113 systemd[1]: Reached target basic.target - Basic System. 
Mar 13 12:20:25.639741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 13 12:20:25.777467 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Mar 13 12:20:25.784511 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 13 12:20:25.796744 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 13 12:20:25.852664 kernel: EXT4-fs (sda9): mounted filesystem e3689c4f-fa92-4cc2-b7ea-ac589877c8df r/w with ordered data mode. Quota mode: none. Mar 13 12:20:25.853198 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 13 12:20:25.857295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 13 12:20:25.936578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 12:20:25.962358 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Mar 13 12:20:25.962410 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:20:25.967114 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:20:25.970492 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:20:25.977502 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:20:26.003598 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 13 12:20:26.012287 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 13 12:20:26.022761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 13 12:20:26.022804 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 12:20:26.038940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 12:20:26.046600 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 13 12:20:26.066667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 13 12:20:27.066311 coreos-metadata[960]: Mar 13 12:20:27.066 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 12:20:27.074931 coreos-metadata[960]: Mar 13 12:20:27.074 INFO Fetch successful Mar 13 12:20:27.074931 coreos-metadata[960]: Mar 13 12:20:27.074 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Mar 13 12:20:27.089415 coreos-metadata[960]: Mar 13 12:20:27.089 INFO Fetch successful Mar 13 12:20:27.124424 coreos-metadata[960]: Mar 13 12:20:27.123 INFO wrote hostname ci-4081.3.101-d13a81acd8 to /sysroot/etc/hostname Mar 13 12:20:27.131095 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 13 12:20:27.550965 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory Mar 13 12:20:27.635517 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory Mar 13 12:20:27.644068 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory Mar 13 12:20:27.652136 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory Mar 13 12:20:29.646384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 13 12:20:29.658691 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 13 12:20:29.670669 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 13 12:20:29.685247 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 13 12:20:29.689441 kernel: BTRFS info (device sda6): last unmount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:20:29.715315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 13 12:20:29.727956 ignition[1063]: INFO : Ignition 2.19.0 Mar 13 12:20:29.727956 ignition[1063]: INFO : Stage: mount Mar 13 12:20:29.727956 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:20:29.727956 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:20:29.749786 ignition[1063]: INFO : mount: mount passed Mar 13 12:20:29.749786 ignition[1063]: INFO : Ignition finished successfully Mar 13 12:20:29.734449 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 13 12:20:29.753677 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 13 12:20:29.770723 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 13 12:20:29.793490 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1077) Mar 13 12:20:29.799505 kernel: BTRFS info (device sda6): first mount of filesystem e3f2e5ef-5667-4244-bb44-c23afbdb3707 Mar 13 12:20:29.799552 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 13 12:20:29.807204 kernel: BTRFS info (device sda6): using free space tree Mar 13 12:20:29.813486 kernel: BTRFS info (device sda6): auto enabling async discard Mar 13 12:20:29.815786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 13 12:20:29.839389 ignition[1094]: INFO : Ignition 2.19.0 Mar 13 12:20:29.839389 ignition[1094]: INFO : Stage: files Mar 13 12:20:29.845975 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 12:20:29.845975 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Mar 13 12:20:29.845975 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Mar 13 12:20:29.873069 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 12:20:29.873069 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 12:20:30.059389 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 12:20:30.065436 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 12:20:30.065436 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 12:20:30.059816 unknown[1094]: wrote ssh authorized keys file for user: core Mar 13 12:20:30.095830 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 12:20:30.105244 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 13 12:20:30.132950 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 12:20:30.249968 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 13 12:20:30.259325 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Mar 13 12:20:30.804975 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 12:20:31.796841 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 13 12:20:31.796841 ignition[1094]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 12:20:31.869405 ignition[1094]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 13 12:20:31.906509 ignition[1094]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 12:20:31.913852 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 12:20:31.913852 ignition[1094]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 12:20:31.913852 ignition[1094]: INFO : files: files passed Mar 13 12:20:31.913852 ignition[1094]: INFO : Ignition finished successfully Mar 13 12:20:31.914311 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 12:20:31.941816 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 12:20:31.949691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Mar 13 12:20:31.962791 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 12:20:31.962882 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 12:20:32.004562 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:20:32.004562 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:20:32.019199 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 12:20:32.020331 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 12:20:32.031787 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 12:20:32.052770 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 12:20:32.082431 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 12:20:32.082551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 12:20:32.092522 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 12:20:32.102351 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 12:20:32.111366 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 12:20:32.123755 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 12:20:32.142178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 12:20:32.157755 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 12:20:32.173199 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 12:20:32.178724 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 13 12:20:32.189283 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 12:20:32.198603 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 12:20:32.198724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 12:20:32.211466 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 12:20:32.215899 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 12:20:32.224984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 12:20:32.234345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 12:20:32.243388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 12:20:32.253220 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 12:20:32.262960 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 12:20:32.273506 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 12:20:32.282160 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 12:20:32.291516 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 12:20:32.299965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 12:20:32.300085 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 12:20:32.313514 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:20:32.318553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:20:32.327693 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 12:20:32.331533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:20:32.336761 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 12:20:32.336871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 12:20:32.349949 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 12:20:32.350064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 12:20:32.355419 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 12:20:32.355516 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 12:20:32.415870 ignition[1146]: INFO : Ignition 2.19.0
Mar 13 12:20:32.415870 ignition[1146]: INFO : Stage: umount
Mar 13 12:20:32.415870 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 12:20:32.415870 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 13 12:20:32.415870 ignition[1146]: INFO : umount: umount passed
Mar 13 12:20:32.415870 ignition[1146]: INFO : Ignition finished successfully
Mar 13 12:20:32.363437 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 13 12:20:32.363540 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 13 12:20:32.387772 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 12:20:32.399100 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 12:20:32.403216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:20:32.421856 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 12:20:32.428524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 12:20:32.428757 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 12:20:32.433953 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 12:20:32.434105 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 12:20:32.449631 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 12:20:32.449740 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 12:20:32.455233 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 12:20:32.458062 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 12:20:32.458165 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 12:20:32.464260 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 12:20:32.464310 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 12:20:32.471888 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 12:20:32.471933 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 12:20:32.479516 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 12:20:32.479556 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 12:20:32.488015 systemd[1]: Stopped target network.target - Network.
Mar 13 12:20:32.497329 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 12:20:32.497380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 12:20:32.510301 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 12:20:32.514010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 12:20:32.523499 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:20:32.531694 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 12:20:32.539833 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 12:20:32.548816 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 12:20:32.548869 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 12:20:32.557238 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 12:20:32.557279 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 12:20:32.565379 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 12:20:32.565443 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 12:20:32.573338 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 12:20:32.573380 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 12:20:32.581882 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 12:20:32.589854 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 12:20:32.599522 systemd-networkd[895]: eth0: DHCPv6 lease lost
Mar 13 12:20:32.600061 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 12:20:32.600162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 12:20:32.801643 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: Data path switched from VF: enP32442s1
Mar 13 12:20:32.609658 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 12:20:32.609767 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 12:20:32.620660 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 12:20:32.620873 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 12:20:32.631239 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 12:20:32.631472 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:20:32.638238 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 12:20:32.638298 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 12:20:32.658689 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 12:20:32.667911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 12:20:32.667990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 12:20:32.677437 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 12:20:32.677483 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:20:32.686913 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 12:20:32.686952 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:20:32.698179 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 12:20:32.698224 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 12:20:32.707223 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 12:20:32.741819 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 12:20:32.741985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 12:20:32.751660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 12:20:32.751701 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:20:32.760871 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 12:20:32.760905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:20:32.770368 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 12:20:32.770410 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 12:20:32.782193 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 12:20:32.782233 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 12:20:32.797440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 12:20:32.797510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 12:20:32.825259 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 12:20:32.832547 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 12:20:32.832635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:20:32.845821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:20:32.845878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:32.856968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 12:20:32.857084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 12:20:32.866731 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 12:20:32.866823 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 12:20:32.877038 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 12:20:33.052085 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 13 12:20:32.898650 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 12:20:32.912043 systemd[1]: Switching root.
Mar 13 12:20:33.061305 systemd-journald[217]: Journal stopped
Mar 13 12:20:42.372629 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 12:20:42.372654 kernel: SELinux: policy capability open_perms=1
Mar 13 12:20:42.372665 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 12:20:42.372676 kernel: SELinux: policy capability always_check_network=0
Mar 13 12:20:42.372686 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 12:20:42.372694 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 12:20:42.372702 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 12:20:42.372710 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 12:20:42.372718 kernel: audit: type=1403 audit(1773404435.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 12:20:42.372728 systemd[1]: Successfully loaded SELinux policy in 268.931ms.
Mar 13 12:20:42.372740 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.463ms.
Mar 13 12:20:42.372750 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 13 12:20:42.372759 systemd[1]: Detected virtualization microsoft.
Mar 13 12:20:42.372767 systemd[1]: Detected architecture arm64.
Mar 13 12:20:42.372777 systemd[1]: Detected first boot.
Mar 13 12:20:42.372788 systemd[1]: Hostname set to .
Mar 13 12:20:42.372797 systemd[1]: Initializing machine ID from random generator.
Mar 13 12:20:42.372806 zram_generator::config[1187]: No configuration found.
Mar 13 12:20:42.372816 systemd[1]: Populated /etc with preset unit settings.
Mar 13 12:20:42.372825 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 12:20:42.372834 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 12:20:42.372843 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 12:20:42.372855 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 12:20:42.372864 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 12:20:42.372874 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 12:20:42.372883 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 12:20:42.372893 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 12:20:42.372902 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 12:20:42.372911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 12:20:42.372922 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 12:20:42.372931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 12:20:42.372941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 12:20:42.372950 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 12:20:42.372959 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 12:20:42.372968 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 12:20:42.372978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 12:20:42.372987 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 13 12:20:42.372998 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 12:20:42.373007 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 12:20:42.373016 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 12:20:42.373027 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 12:20:42.373037 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 12:20:42.373046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 12:20:42.373056 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 12:20:42.373065 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 12:20:42.373077 systemd[1]: Reached target swap.target - Swaps.
Mar 13 12:20:42.373087 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 12:20:42.373096 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 12:20:42.373106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 12:20:42.373115 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 12:20:42.373125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 12:20:42.373136 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 12:20:42.373146 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 12:20:42.373155 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 12:20:42.373165 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 12:20:42.373174 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 12:20:42.373184 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 12:20:42.373193 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 12:20:42.373205 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 12:20:42.373215 systemd[1]: Reached target machines.target - Containers.
Mar 13 12:20:42.373224 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 12:20:42.373234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 12:20:42.373244 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 12:20:42.373253 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 12:20:42.373263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 12:20:42.373272 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 12:20:42.373284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 12:20:42.373294 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 12:20:42.373303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 12:20:42.373313 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 12:20:42.373323 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 12:20:42.373332 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 12:20:42.373342 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 12:20:42.373351 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 12:20:42.373362 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 12:20:42.373372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 12:20:42.373381 kernel: fuse: init (API version 7.39)
Mar 13 12:20:42.373390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 12:20:42.373400 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 12:20:42.373409 kernel: loop: module loaded
Mar 13 12:20:42.373418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 12:20:42.373444 systemd-journald[1283]: Collecting audit messages is disabled.
Mar 13 12:20:42.373467 systemd-journald[1283]: Journal started
Mar 13 12:20:42.373494 systemd-journald[1283]: Runtime Journal (/run/log/journal/6643b507203a452f8303daa3a7b35f5f) is 8.0M, max 78.5M, 70.5M free.
Mar 13 12:20:41.126276 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 12:20:41.492881 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 13 12:20:41.493258 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 12:20:41.494745 systemd[1]: systemd-journald.service: Consumed 2.508s CPU time.
Mar 13 12:20:42.383218 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 12:20:42.383267 systemd[1]: Stopped verity-setup.service.
Mar 13 12:20:42.397642 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 12:20:42.398553 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 12:20:42.403393 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 12:20:42.408969 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 12:20:42.413535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 12:20:42.418688 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 12:20:42.424319 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 12:20:42.430517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 12:20:42.436324 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 12:20:42.445670 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 12:20:42.445830 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 12:20:42.451719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 12:20:42.451853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 12:20:42.457434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 12:20:42.458578 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 12:20:42.464678 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 12:20:42.464829 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 12:20:42.470123 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 12:20:42.470257 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 12:20:42.475367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 12:20:42.484859 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 12:20:42.500702 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 12:20:42.510566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 12:20:42.518355 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 12:20:42.523829 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 12:20:42.523971 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 12:20:42.529841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 13 12:20:42.540629 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 12:20:42.547245 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 12:20:42.552380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 12:20:42.574506 kernel: ACPI: bus type drm_connector registered
Mar 13 12:20:42.596657 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 12:20:42.606013 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 12:20:42.613801 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 12:20:42.616721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 12:20:42.621928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 12:20:42.624685 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 12:20:42.637725 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 12:20:42.645080 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 12:20:42.645369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 12:20:42.650905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 12:20:42.656210 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 12:20:42.662173 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 12:20:42.667553 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 12:20:42.676559 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 12:20:42.685134 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 12:20:42.697474 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 12:20:42.710704 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 13 12:20:42.728904 systemd-journald[1283]: Time spent on flushing to /var/log/journal/6643b507203a452f8303daa3a7b35f5f is 56.742ms for 893 entries.
Mar 13 12:20:42.728904 systemd-journald[1283]: System Journal (/var/log/journal/6643b507203a452f8303daa3a7b35f5f) is 11.8M, max 2.6G, 2.6G free.
Mar 13 12:20:42.832468 kernel: loop0: detected capacity change from 0 to 114328
Mar 13 12:20:42.832535 systemd-journald[1283]: Received client request to flush runtime journal.
Mar 13 12:20:42.832569 systemd-journald[1283]: /var/log/journal/6643b507203a452f8303daa3a7b35f5f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Mar 13 12:20:42.832593 systemd-journald[1283]: Rotating system journal.
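Entries in this log follow journald's short-precise shape: a `Mon DD HH:MM:SS.ffffff` timestamp, a source (`systemd`, `kernel`, `ignition`, ...) with an optional `[PID]`, then the message. A minimal sketch of a parser for lines of this shape (the regex is tailored to this log's prefix, not a general syslog parser):

```python
import re

# Matches lines like:
#   "Mar 13 12:20:31.914311 systemd[1]: Finished ignition-files.service - Ignition (files)."
#   "Mar 13 12:20:42.373381 kernel: fuse: init (API version 7.39)"
LINE = re.compile(
    r"^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) "   # timestamp prefix
    r"(?P<src>[^\s\[:]+)(?:\[(?P<pid>\d+)\])?: "     # source and optional [PID]
    r"(?P<msg>.*)$"                                  # rest of the entry
)

def parse_entry(line: str):
    """Return (timestamp, source, pid-or-None, message), or None if unmatched."""
    m = LINE.match(line)
    if m is None:
        return None
    return m.group("ts"), m.group("src"), m.group("pid"), m.group("msg")
```

Note that entries are not strictly time-ordered here (e.g. buffered `ignition[1146]` and pre-journal-start lines appear out of sequence), so sorting parsed entries by timestamp would reorder them relative to the journal's own emission order.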
Mar 13 12:20:42.724228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 12:20:42.743646 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 13 12:20:42.767212 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 13 12:20:42.810664 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 12:20:42.811385 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 13 12:20:42.834335 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 12:20:42.929852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 12:20:42.958649 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 12:20:42.970704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 12:20:43.103771 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Mar 13 12:20:43.104161 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Mar 13 12:20:43.108645 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 12:20:43.592505 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 12:20:43.733510 kernel: loop1: detected capacity change from 0 to 114424
Mar 13 12:20:43.745540 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 12:20:43.760684 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 12:20:43.784928 systemd-udevd[1345]: Using default interface naming scheme 'v255'.
Mar 13 12:20:43.912546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 12:20:43.934999 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 12:20:43.976321 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 13 12:20:43.995748 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 12:20:44.075530 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 12:20:44.119507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Mar 13 12:20:44.130954 kernel: hv_vmbus: registering driver hv_balloon
Mar 13 12:20:44.131055 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 13 12:20:44.131079 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 12:20:44.134545 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 13 12:20:44.173785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:20:44.185735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:20:44.186046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:44.197382 kernel: hv_vmbus: registering driver hyperv_fb
Mar 13 12:20:44.197468 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 13 12:20:44.204714 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 13 12:20:44.206866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:20:44.214824 kernel: Console: switching to colour dummy device 80x25
Mar 13 12:20:44.223030 kernel: Console: switching to colour frame buffer device 128x48
Mar 13 12:20:44.232831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 12:20:44.234762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:44.245659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 12:20:44.278558 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1352)
Mar 13 12:20:44.323422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 13 12:20:44.333635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 12:20:44.375054 systemd-networkd[1359]: lo: Link UP
Mar 13 12:20:44.375062 systemd-networkd[1359]: lo: Gained carrier
Mar 13 12:20:44.376954 systemd-networkd[1359]: Enumeration completed
Mar 13 12:20:44.377095 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 12:20:44.377284 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 12:20:44.377287 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 12:20:44.391731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 12:20:44.441494 kernel: mlx5_core 7eba:00:02.0 enP32442s1: Link up
Mar 13 12:20:44.468764 kernel: hv_netvsc 7ced8dc3-89b1-7ced-8dc3-89b17ced8dc3 eth0: Data path switched to VF: enP32442s1
Mar 13 12:20:44.469406 systemd-networkd[1359]: enP32442s1: Link UP
Mar 13 12:20:44.469602 systemd-networkd[1359]: eth0: Link UP
Mar 13 12:20:44.469607 systemd-networkd[1359]: eth0: Gained carrier
Mar 13 12:20:44.469623 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 12:20:44.473773 systemd-networkd[1359]: enP32442s1: Gained carrier
Mar 13 12:20:44.479550 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 13 12:20:44.505673 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 12:20:44.530509 kernel: loop2: detected capacity change from 0 to 31320
Mar 13 12:20:45.245165 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 13 12:20:45.257630 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 13 12:20:45.293494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 12:20:45.303506 kernel: loop3: detected capacity change from 0 to 197488
Mar 13 12:20:45.353502 kernel: loop4: detected capacity change from 0 to 114328
Mar 13 12:20:45.366504 kernel: loop5: detected capacity change from 0 to 114424
Mar 13 12:20:45.377499 kernel: loop6: detected capacity change from 0 to 31320
Mar 13 12:20:45.389502 kernel: loop7: detected capacity change from 0 to 197488
Mar 13 12:20:45.401622 (sd-merge)[1449]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 13 12:20:45.402091 (sd-merge)[1449]: Merged extensions into '/usr'.
Mar 13 12:20:45.407126 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 12:20:45.407146 systemd[1]: Reloading...
Mar 13 12:20:45.418516 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 13 12:20:45.485581 zram_generator::config[1476]: No configuration found.
Mar 13 12:20:45.697195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 13 12:20:45.767305 systemd[1]: Reloading finished in 359 ms.
Mar 13 12:20:45.794520 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 13 12:20:45.800581 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 12:20:45.823925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 12:20:45.840652 systemd[1]: Starting ensure-sysext.service... Mar 13 12:20:45.846682 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 13 12:20:45.855702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 12:20:45.867291 lvm[1535]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 13 12:20:45.867635 systemd[1]: Reloading requested from client PID 1534 ('systemctl') (unit ensure-sysext.service)... Mar 13 12:20:45.867650 systemd[1]: Reloading... Mar 13 12:20:45.877588 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 12:20:45.878897 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 12:20:45.879764 systemd-tmpfiles[1536]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 12:20:45.880066 systemd-tmpfiles[1536]: ACLs are not supported, ignoring. Mar 13 12:20:45.880190 systemd-tmpfiles[1536]: ACLs are not supported, ignoring. Mar 13 12:20:45.923586 systemd-networkd[1359]: eth0: Gained IPv6LL Mar 13 12:20:45.946714 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 12:20:45.946724 systemd-tmpfiles[1536]: Skipping /boot Mar 13 12:20:45.956031 systemd-tmpfiles[1536]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 12:20:45.956158 systemd-tmpfiles[1536]: Skipping /boot Mar 13 12:20:45.965504 zram_generator::config[1567]: No configuration found. Mar 13 12:20:46.079694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:20:46.154513 systemd[1]: Reloading finished in 286 ms. 
Mar 13 12:20:46.173218 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 12:20:46.185945 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 13 12:20:46.192385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 12:20:46.210729 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 13 12:20:46.220146 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 12:20:46.228026 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 12:20:46.242661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 12:20:46.257802 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 12:20:46.276839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 12:20:46.285845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 12:20:46.292804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 12:20:46.314224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 12:20:46.324322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 12:20:46.330013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 12:20:46.330325 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 12:20:46.336676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 12:20:46.336847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 12:20:46.344137 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 13 12:20:46.344285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 12:20:46.350038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 12:20:46.350177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 12:20:46.357243 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 12:20:46.357377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 12:20:46.367859 systemd[1]: Finished ensure-sysext.service. Mar 13 12:20:46.372977 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 12:20:46.387991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 12:20:46.388072 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 12:20:46.399753 systemd-resolved[1631]: Positive Trust Anchors: Mar 13 12:20:46.399769 systemd-resolved[1631]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 12:20:46.399802 systemd-resolved[1631]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 12:20:46.436106 systemd-resolved[1631]: Using system hostname 'ci-4081.3.101-d13a81acd8'. Mar 13 12:20:46.437760 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Mar 13 12:20:46.443019 systemd[1]: Reached target network.target - Network. Mar 13 12:20:46.447020 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 12:20:46.452091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 12:20:46.516436 augenrules[1657]: No rules Mar 13 12:20:46.518202 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 13 12:20:46.627161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 12:20:48.331238 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 13 12:20:48.337508 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 12:20:52.854373 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 12:20:52.871355 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 13 12:20:52.881634 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 12:20:52.895518 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 12:20:52.900835 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 12:20:52.905953 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 12:20:52.911599 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 12:20:52.917532 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 12:20:52.922855 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 12:20:52.929027 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Mar 13 12:20:52.935101 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 12:20:52.935134 systemd[1]: Reached target paths.target - Path Units. Mar 13 12:20:52.939653 systemd[1]: Reached target timers.target - Timer Units. Mar 13 12:20:52.946542 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 12:20:52.953155 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 12:20:52.962543 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 12:20:52.967969 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 12:20:52.973132 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 12:20:52.977267 systemd[1]: Reached target basic.target - Basic System. Mar 13 12:20:52.981348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 12:20:52.981372 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 12:20:52.993579 systemd[1]: Starting chronyd.service - NTP client/server... Mar 13 12:20:53.000641 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 12:20:53.015688 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 13 12:20:53.021948 (chronyd)[1670]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 13 12:20:53.024238 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 12:20:53.030637 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 13 12:20:53.041431 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 13 12:20:53.045999 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 12:20:53.046140 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 13 12:20:53.049697 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 13 12:20:53.055172 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 13 12:20:53.057538 KVP[1678]: KVP starting; pid is:1678 Mar 13 12:20:53.059270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:20:53.069862 chronyd[1683]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 13 12:20:53.072677 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 12:20:53.080271 jq[1676]: false Mar 13 12:20:53.081818 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 12:20:53.088623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 12:20:53.098695 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 12:20:53.105776 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 12:20:53.114170 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 12:20:53.119136 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 12:20:53.119802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 13 12:20:53.123176 chronyd[1683]: Timezone right/UTC failed leap second check, ignoring Mar 13 12:20:53.124554 chronyd[1683]: Loaded seccomp filter (level 2) Mar 13 12:20:53.128610 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 12:20:53.138454 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 12:20:53.147809 systemd[1]: Started chronyd.service - NTP client/server. Mar 13 12:20:53.149698 kernel: hv_utils: KVP IC version 4.0 Mar 13 12:20:53.149303 KVP[1678]: KVP LIC Version: 3.1 Mar 13 12:20:53.157521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 12:20:53.159545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 12:20:53.163813 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 12:20:53.164000 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 12:20:53.168296 jq[1692]: true Mar 13 12:20:53.178343 extend-filesystems[1677]: Found loop4 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found loop5 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found loop6 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found loop7 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda1 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda2 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda3 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found usr Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda4 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda6 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda7 Mar 13 12:20:53.178343 extend-filesystems[1677]: Found sda9 Mar 13 12:20:53.178343 extend-filesystems[1677]: Checking size of /dev/sda9 Mar 13 12:20:53.301231 update_engine[1691]: I20260313 12:20:53.287770 1691 main.cc:92] Flatcar Update 
Engine starting Mar 13 12:20:53.204083 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 12:20:53.207329 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 12:20:53.208139 (ntainerd)[1713]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 12:20:53.265301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 12:20:53.302904 jq[1700]: true Mar 13 12:20:53.287270 systemd-logind[1690]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Mar 13 12:20:53.288253 systemd-logind[1690]: New seat seat0. Mar 13 12:20:53.292953 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 12:20:53.306706 tar[1699]: linux-arm64/LICENSE Mar 13 12:20:53.306706 tar[1699]: linux-arm64/helm Mar 13 12:20:53.348506 extend-filesystems[1677]: Old size kept for /dev/sda9 Mar 13 12:20:53.348506 extend-filesystems[1677]: Found sr0 Mar 13 12:20:53.353168 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 12:20:53.354241 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 12:20:53.398987 dbus-daemon[1675]: [system] SELinux support is enabled Mar 13 12:20:53.399225 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 12:20:53.409197 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 12:20:53.410717 dbus-daemon[1675]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 13 12:20:53.409229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 13 12:20:53.415458 update_engine[1691]: I20260313 12:20:53.415398 1691 update_check_scheduler.cc:74] Next update check in 11m58s Mar 13 12:20:53.418546 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 12:20:53.418570 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 12:20:53.428703 systemd[1]: Started update-engine.service - Update Engine. Mar 13 12:20:53.445127 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 12:20:53.474767 bash[1730]: Updated "/home/core/.ssh/authorized_keys" Mar 13 12:20:53.478440 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 12:20:53.488462 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 13 12:20:53.543766 coreos-metadata[1672]: Mar 13 12:20:53.543 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 13 12:20:53.546179 coreos-metadata[1672]: Mar 13 12:20:53.545 INFO Fetch successful Mar 13 12:20:53.546179 coreos-metadata[1672]: Mar 13 12:20:53.546 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 13 12:20:53.553758 coreos-metadata[1672]: Mar 13 12:20:53.553 INFO Fetch successful Mar 13 12:20:53.554112 coreos-metadata[1672]: Mar 13 12:20:53.554 INFO Fetching http://168.63.129.16/machine/dbc13a18-0845-4aa3-b2ad-40443603278a/52f878d7%2D226a%2D4047%2Db57c%2Dfbaa1531e631.%5Fci%2D4081.3.101%2Dd13a81acd8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 13 12:20:53.555934 coreos-metadata[1672]: Mar 13 12:20:53.555 INFO Fetch successful Mar 13 12:20:53.555934 coreos-metadata[1672]: Mar 13 12:20:53.555 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 13 12:20:53.570075 coreos-metadata[1672]: Mar 13 12:20:53.570 
INFO Fetch successful Mar 13 12:20:53.585997 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1740) Mar 13 12:20:53.702089 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 13 12:20:53.708964 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 12:20:53.839767 sshd_keygen[1697]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 12:20:53.856467 locksmithd[1753]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 12:20:53.881340 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 12:20:53.897955 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 12:20:53.908153 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 13 12:20:53.920373 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 12:20:53.921266 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 12:20:53.934176 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 12:20:53.968003 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 12:20:53.983753 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 13 12:20:54.005866 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 12:20:54.021517 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 13 12:20:54.033562 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 12:20:54.134247 tar[1699]: linux-arm64/README.md Mar 13 12:20:54.148660 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 12:20:54.262710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 12:20:54.269769 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:20:54.500988 containerd[1713]: time="2026-03-13T12:20:54.500899280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 13 12:20:54.529519 containerd[1713]: time="2026-03-13T12:20:54.529394760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.532456 containerd[1713]: time="2026-03-13T12:20:54.532401400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.129-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:20:54.532456 containerd[1713]: time="2026-03-13T12:20:54.532447400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 13 12:20:54.532456 containerd[1713]: time="2026-03-13T12:20:54.532466800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 13 12:20:54.532696 containerd[1713]: time="2026-03-13T12:20:54.532673320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 13 12:20:54.532727 containerd[1713]: time="2026-03-13T12:20:54.532697720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.532782 containerd[1713]: time="2026-03-13T12:20:54.532765000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:20:54.532808 containerd[1713]: time="2026-03-13T12:20:54.532786240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.532994 containerd[1713]: time="2026-03-13T12:20:54.532970040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:20:54.532994 containerd[1713]: time="2026-03-13T12:20:54.532992000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.533038 containerd[1713]: time="2026-03-13T12:20:54.533006240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:20:54.533038 containerd[1713]: time="2026-03-13T12:20:54.533016800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.533105 containerd[1713]: time="2026-03-13T12:20:54.533089560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.533315 containerd[1713]: time="2026-03-13T12:20:54.533295360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 13 12:20:54.533418 containerd[1713]: time="2026-03-13T12:20:54.533399880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 13 12:20:54.533418 containerd[1713]: time="2026-03-13T12:20:54.533416720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 13 12:20:54.533574 containerd[1713]: time="2026-03-13T12:20:54.533557040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 13 12:20:54.533629 containerd[1713]: time="2026-03-13T12:20:54.533611520Z" level=info msg="metadata content store policy set" policy=shared Mar 13 12:20:54.549888 containerd[1713]: time="2026-03-13T12:20:54.549831120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 13 12:20:54.549888 containerd[1713]: time="2026-03-13T12:20:54.549903920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 13 12:20:54.550045 containerd[1713]: time="2026-03-13T12:20:54.549920800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 13 12:20:54.550045 containerd[1713]: time="2026-03-13T12:20:54.549936840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 13 12:20:54.550045 containerd[1713]: time="2026-03-13T12:20:54.549952360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 13 12:20:54.550167 containerd[1713]: time="2026-03-13T12:20:54.550144560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 13 12:20:54.550469 containerd[1713]: time="2026-03-13T12:20:54.550444680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 13 12:20:54.550649 containerd[1713]: time="2026-03-13T12:20:54.550629240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 13 12:20:54.550692 containerd[1713]: time="2026-03-13T12:20:54.550650160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 13 12:20:54.550692 containerd[1713]: time="2026-03-13T12:20:54.550664320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 13 12:20:54.550692 containerd[1713]: time="2026-03-13T12:20:54.550677800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550692 containerd[1713]: time="2026-03-13T12:20:54.550691160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550768 containerd[1713]: time="2026-03-13T12:20:54.550704640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550768 containerd[1713]: time="2026-03-13T12:20:54.550722080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550768 containerd[1713]: time="2026-03-13T12:20:54.550737160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550768 containerd[1713]: time="2026-03-13T12:20:54.550750520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550768 containerd[1713]: time="2026-03-13T12:20:54.550762200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 13 12:20:54.550854 containerd[1713]: time="2026-03-13T12:20:54.550774040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 13 12:20:54.550854 containerd[1713]: time="2026-03-13T12:20:54.550794360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550854 containerd[1713]: time="2026-03-13T12:20:54.550813040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550854 containerd[1713]: time="2026-03-13T12:20:54.550826280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550854 containerd[1713]: time="2026-03-13T12:20:54.550839360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550854 containerd[1713]: time="2026-03-13T12:20:54.550851280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550864160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550875360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550890560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550906280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550920800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550933760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.550958 containerd[1713]: time="2026-03-13T12:20:54.550950960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.551094 containerd[1713]: time="2026-03-13T12:20:54.550963120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.551094 containerd[1713]: time="2026-03-13T12:20:54.550978840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 13 12:20:54.551094 containerd[1713]: time="2026-03-13T12:20:54.551001000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.551094 containerd[1713]: time="2026-03-13T12:20:54.551071160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.551094 containerd[1713]: time="2026-03-13T12:20:54.551087160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 13 12:20:54.551247 containerd[1713]: time="2026-03-13T12:20:54.551225640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 13 12:20:54.551278 containerd[1713]: time="2026-03-13T12:20:54.551253800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 13 12:20:54.551343 containerd[1713]: time="2026-03-13T12:20:54.551267360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 13 12:20:54.551377 containerd[1713]: time="2026-03-13T12:20:54.551350120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 13 12:20:54.551377 containerd[1713]: time="2026-03-13T12:20:54.551361240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.551428 containerd[1713]: time="2026-03-13T12:20:54.551377520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 13 12:20:54.551428 containerd[1713]: time="2026-03-13T12:20:54.551387600Z" level=info msg="NRI interface is disabled by configuration." Mar 13 12:20:54.551511 containerd[1713]: time="2026-03-13T12:20:54.551401120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 13 12:20:54.551984 containerd[1713]: time="2026-03-13T12:20:54.551897400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 13 12:20:54.551984 containerd[1713]: time="2026-03-13T12:20:54.551971400Z" level=info msg="Connect containerd service" Mar 13 12:20:54.552130 containerd[1713]: time="2026-03-13T12:20:54.552002160Z" level=info msg="using legacy CRI server" Mar 13 12:20:54.552130 containerd[1713]: time="2026-03-13T12:20:54.552009520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 12:20:54.552130 containerd[1713]: 
time="2026-03-13T12:20:54.552116000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 13 12:20:54.553932 containerd[1713]: time="2026-03-13T12:20:54.553896280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 12:20:54.554057 containerd[1713]: time="2026-03-13T12:20:54.554029120Z" level=info msg="Start subscribing containerd event" Mar 13 12:20:54.554085 containerd[1713]: time="2026-03-13T12:20:54.554073160Z" level=info msg="Start recovering state" Mar 13 12:20:54.554155 containerd[1713]: time="2026-03-13T12:20:54.554140600Z" level=info msg="Start event monitor" Mar 13 12:20:54.554180 containerd[1713]: time="2026-03-13T12:20:54.554155280Z" level=info msg="Start snapshots syncer" Mar 13 12:20:54.554180 containerd[1713]: time="2026-03-13T12:20:54.554164360Z" level=info msg="Start cni network conf syncer for default" Mar 13 12:20:54.554180 containerd[1713]: time="2026-03-13T12:20:54.554171720Z" level=info msg="Start streaming server" Mar 13 12:20:54.554578 containerd[1713]: time="2026-03-13T12:20:54.554556520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 12:20:54.554613 containerd[1713]: time="2026-03-13T12:20:54.554602640Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 12:20:54.554747 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 12:20:54.559843 containerd[1713]: time="2026-03-13T12:20:54.559798800Z" level=info msg="containerd successfully booted in 0.059784s" Mar 13 12:20:54.560350 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 12:20:54.565586 systemd[1]: Startup finished in 629ms (kernel) + 18.102s (initrd) + 19.061s (userspace) = 37.793s. 
Mar 13 12:20:54.648820 kubelet[1825]: E0313 12:20:54.648756 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:20:54.651993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:20:54.652320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 12:20:54.979510 login[1815]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 13 12:20:54.981711 login[1816]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:20:54.997425 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 12:20:55.008935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 12:20:55.010886 systemd-logind[1690]: New session 2 of user core. Mar 13 12:20:55.052503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 12:20:55.059748 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 12:20:55.062951 (systemd)[1842]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 12:20:55.278174 systemd[1842]: Queued start job for default target default.target. Mar 13 12:20:55.285453 systemd[1842]: Created slice app.slice - User Application Slice. Mar 13 12:20:55.285499 systemd[1842]: Reached target paths.target - Paths. Mar 13 12:20:55.285512 systemd[1842]: Reached target timers.target - Timers. Mar 13 12:20:55.286703 systemd[1842]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 12:20:55.301117 systemd[1842]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 12:20:55.301237 systemd[1842]: Reached target sockets.target - Sockets. 
Mar 13 12:20:55.301253 systemd[1842]: Reached target basic.target - Basic System. Mar 13 12:20:55.301292 systemd[1842]: Reached target default.target - Main User Target. Mar 13 12:20:55.301321 systemd[1842]: Startup finished in 232ms. Mar 13 12:20:55.301401 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 12:20:55.308670 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 12:20:55.979923 login[1815]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:20:55.983979 systemd-logind[1690]: New session 1 of user core. Mar 13 12:20:55.989627 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 12:20:57.004275 waagent[1813]: 2026-03-13T12:20:57.004169Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 13 12:20:57.008894 waagent[1813]: 2026-03-13T12:20:57.008829Z INFO Daemon Daemon OS: flatcar 4081.3.101 Mar 13 12:20:57.012555 waagent[1813]: 2026-03-13T12:20:57.012504Z INFO Daemon Daemon Python: 3.11.9 Mar 13 12:20:57.016101 waagent[1813]: 2026-03-13T12:20:57.016029Z INFO Daemon Daemon Run daemon Mar 13 12:20:57.019936 waagent[1813]: 2026-03-13T12:20:57.019885Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.101' Mar 13 12:20:57.027696 waagent[1813]: 2026-03-13T12:20:57.027633Z INFO Daemon Daemon Using waagent for provisioning Mar 13 12:20:57.032298 waagent[1813]: 2026-03-13T12:20:57.032251Z INFO Daemon Daemon Activate resource disk Mar 13 12:20:57.036496 waagent[1813]: 2026-03-13T12:20:57.036442Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 13 12:20:57.046653 waagent[1813]: 2026-03-13T12:20:57.046599Z INFO Daemon Daemon Found device: None Mar 13 12:20:57.050288 waagent[1813]: 2026-03-13T12:20:57.050244Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 13 12:20:57.056849 waagent[1813]: 
2026-03-13T12:20:57.056807Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 13 12:20:57.067774 waagent[1813]: 2026-03-13T12:20:57.067718Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 12:20:57.072350 waagent[1813]: 2026-03-13T12:20:57.072309Z INFO Daemon Daemon Running default provisioning handler Mar 13 12:20:57.083028 waagent[1813]: 2026-03-13T12:20:57.082954Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 13 12:20:57.094123 waagent[1813]: 2026-03-13T12:20:57.094059Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 13 12:20:57.102349 waagent[1813]: 2026-03-13T12:20:57.102296Z INFO Daemon Daemon cloud-init is enabled: False Mar 13 12:20:57.107028 waagent[1813]: 2026-03-13T12:20:57.106971Z INFO Daemon Daemon Copying ovf-env.xml Mar 13 12:20:57.312981 waagent[1813]: 2026-03-13T12:20:57.310071Z INFO Daemon Daemon Successfully mounted dvd Mar 13 12:20:57.354646 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 13 12:20:57.358875 waagent[1813]: 2026-03-13T12:20:57.358792Z INFO Daemon Daemon Detect protocol endpoint Mar 13 12:20:57.362838 waagent[1813]: 2026-03-13T12:20:57.362788Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 13 12:20:57.367527 waagent[1813]: 2026-03-13T12:20:57.367442Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 13 12:20:57.372695 waagent[1813]: 2026-03-13T12:20:57.372650Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 13 12:20:57.377136 waagent[1813]: 2026-03-13T12:20:57.377090Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 13 12:20:57.381440 waagent[1813]: 2026-03-13T12:20:57.381395Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 13 12:20:57.436102 waagent[1813]: 2026-03-13T12:20:57.436058Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 13 12:20:57.441413 waagent[1813]: 2026-03-13T12:20:57.441386Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 13 12:20:57.445759 waagent[1813]: 2026-03-13T12:20:57.445715Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 13 12:20:57.851073 waagent[1813]: 2026-03-13T12:20:57.850970Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 13 12:20:57.856783 waagent[1813]: 2026-03-13T12:20:57.856724Z INFO Daemon Daemon Forcing an update of the goal state. Mar 13 12:20:57.864381 waagent[1813]: 2026-03-13T12:20:57.864327Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 12:20:57.906171 waagent[1813]: 2026-03-13T12:20:57.906122Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Mar 13 12:20:57.910981 waagent[1813]: 2026-03-13T12:20:57.910934Z INFO Daemon Mar 13 12:20:57.913295 waagent[1813]: 2026-03-13T12:20:57.913251Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 0707a4cf-501e-47c0-8fbb-0ca18bf08c57 eTag: 12647352300421460255 source: Fabric] Mar 13 12:20:57.922679 waagent[1813]: 2026-03-13T12:20:57.922633Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 13 12:20:57.928367 waagent[1813]: 2026-03-13T12:20:57.928323Z INFO Daemon Mar 13 12:20:57.930691 waagent[1813]: 2026-03-13T12:20:57.930648Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 13 12:20:57.939646 waagent[1813]: 2026-03-13T12:20:57.939611Z INFO Daemon Daemon Downloading artifacts profile blob Mar 13 12:20:58.019160 waagent[1813]: 2026-03-13T12:20:58.019072Z INFO Daemon Downloaded certificate {'thumbprint': '353C7231568748D1A12897EA78960303993263AC', 'hasPrivateKey': True} Mar 13 12:20:58.027029 waagent[1813]: 2026-03-13T12:20:58.026979Z INFO Daemon Fetch goal state completed Mar 13 12:20:58.036712 waagent[1813]: 2026-03-13T12:20:58.036650Z INFO Daemon Daemon Starting provisioning Mar 13 12:20:58.040725 waagent[1813]: 2026-03-13T12:20:58.040681Z INFO Daemon Daemon Handle ovf-env.xml. Mar 13 12:20:58.044501 waagent[1813]: 2026-03-13T12:20:58.044458Z INFO Daemon Daemon Set hostname [ci-4081.3.101-d13a81acd8] Mar 13 12:20:58.087223 waagent[1813]: 2026-03-13T12:20:58.087149Z INFO Daemon Daemon Publish hostname [ci-4081.3.101-d13a81acd8] Mar 13 12:20:58.092551 waagent[1813]: 2026-03-13T12:20:58.092488Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 13 12:20:58.097773 waagent[1813]: 2026-03-13T12:20:58.097727Z INFO Daemon Daemon Primary interface is [eth0] Mar 13 12:20:58.176350 systemd-networkd[1359]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 12:20:58.176356 systemd-networkd[1359]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 13 12:20:58.176381 systemd-networkd[1359]: eth0: DHCP lease lost Mar 13 12:20:58.178610 waagent[1813]: 2026-03-13T12:20:58.177598Z INFO Daemon Daemon Create user account if not exists Mar 13 12:20:58.182838 waagent[1813]: 2026-03-13T12:20:58.182783Z INFO Daemon Daemon User core already exists, skip useradd Mar 13 12:20:58.187925 waagent[1813]: 2026-03-13T12:20:58.187873Z INFO Daemon Daemon Configure sudoer Mar 13 12:20:58.192444 waagent[1813]: 2026-03-13T12:20:58.192380Z INFO Daemon Daemon Configure sshd Mar 13 12:20:58.195895 systemd-networkd[1359]: eth0: DHCPv6 lease lost Mar 13 12:20:58.196793 waagent[1813]: 2026-03-13T12:20:58.196735Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 13 12:20:58.210169 waagent[1813]: 2026-03-13T12:20:58.210098Z INFO Daemon Daemon Deploy ssh public key. Mar 13 12:20:58.222614 systemd-networkd[1359]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 13 12:20:59.869034 waagent[1813]: 2026-03-13T12:20:59.868976Z INFO Daemon Daemon Provisioning complete Mar 13 12:20:59.885451 waagent[1813]: 2026-03-13T12:20:59.885392Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 13 12:20:59.891085 waagent[1813]: 2026-03-13T12:20:59.891024Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 13 12:20:59.899748 waagent[1813]: 2026-03-13T12:20:59.899699Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 13 12:21:00.036516 waagent[1892]: 2026-03-13T12:21:00.035818Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 13 12:21:00.036516 waagent[1892]: 2026-03-13T12:21:00.035979Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.101 Mar 13 12:21:00.036516 waagent[1892]: 2026-03-13T12:21:00.036033Z INFO ExtHandler ExtHandler Python: 3.11.9 Mar 13 12:21:00.119951 waagent[1892]: 2026-03-13T12:21:00.119787Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.101; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 13 12:21:00.120093 waagent[1892]: 2026-03-13T12:21:00.120055Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 12:21:00.120160 waagent[1892]: 2026-03-13T12:21:00.120131Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 12:21:00.128082 waagent[1892]: 2026-03-13T12:21:00.128010Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 13 12:21:00.133683 waagent[1892]: 2026-03-13T12:21:00.133636Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Mar 13 12:21:00.134215 waagent[1892]: 2026-03-13T12:21:00.134171Z INFO ExtHandler Mar 13 12:21:00.134284 waagent[1892]: 2026-03-13T12:21:00.134257Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5affc23f-92c6-4270-b09f-8fac9d78b941 eTag: 12647352300421460255 source: Fabric] Mar 13 12:21:00.134621 waagent[1892]: 2026-03-13T12:21:00.134580Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 13 12:21:00.135221 waagent[1892]: 2026-03-13T12:21:00.135176Z INFO ExtHandler Mar 13 12:21:00.135283 waagent[1892]: 2026-03-13T12:21:00.135258Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 13 12:21:00.138582 waagent[1892]: 2026-03-13T12:21:00.138546Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 13 12:21:00.210622 waagent[1892]: 2026-03-13T12:21:00.210521Z INFO ExtHandler Downloaded certificate {'thumbprint': '353C7231568748D1A12897EA78960303993263AC', 'hasPrivateKey': True} Mar 13 12:21:00.211180 waagent[1892]: 2026-03-13T12:21:00.211133Z INFO ExtHandler Fetch goal state completed Mar 13 12:21:00.225959 waagent[1892]: 2026-03-13T12:21:00.225892Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1892 Mar 13 12:21:00.226130 waagent[1892]: 2026-03-13T12:21:00.226093Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 13 12:21:00.227959 waagent[1892]: 2026-03-13T12:21:00.227906Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.101', '', 'Flatcar Container Linux by Kinvolk'] Mar 13 12:21:00.228355 waagent[1892]: 2026-03-13T12:21:00.228315Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 13 12:21:00.302014 waagent[1892]: 2026-03-13T12:21:00.301967Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 13 12:21:00.302220 waagent[1892]: 2026-03-13T12:21:00.302181Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 13 12:21:00.308515 waagent[1892]: 2026-03-13T12:21:00.308459Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Mar 13 12:21:00.315455 systemd[1]: Reloading requested from client PID 1905 ('systemctl') (unit waagent.service)... Mar 13 12:21:00.315470 systemd[1]: Reloading... 
Mar 13 12:21:00.414560 zram_generator::config[1942]: No configuration found. Mar 13 12:21:00.522350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:21:00.603623 systemd[1]: Reloading finished in 287 ms. Mar 13 12:21:00.627499 waagent[1892]: 2026-03-13T12:21:00.624278Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 13 12:21:00.630655 systemd[1]: Reloading requested from client PID 1996 ('systemctl') (unit waagent.service)... Mar 13 12:21:00.630791 systemd[1]: Reloading... Mar 13 12:21:00.718576 zram_generator::config[2030]: No configuration found. Mar 13 12:21:00.816556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:21:00.891892 systemd[1]: Reloading finished in 260 ms. Mar 13 12:21:00.921508 waagent[1892]: 2026-03-13T12:21:00.919717Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 13 12:21:00.921508 waagent[1892]: 2026-03-13T12:21:00.919893Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 13 12:21:01.716505 waagent[1892]: 2026-03-13T12:21:01.716389Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 13 12:21:01.717124 waagent[1892]: 2026-03-13T12:21:01.717069Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 13 12:21:01.717963 waagent[1892]: 2026-03-13T12:21:01.717900Z INFO ExtHandler ExtHandler Starting env monitor service. 
Mar 13 12:21:01.718428 waagent[1892]: 2026-03-13T12:21:01.718271Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 13 12:21:01.719499 waagent[1892]: 2026-03-13T12:21:01.718659Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 12:21:01.719499 waagent[1892]: 2026-03-13T12:21:01.718769Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 12:21:01.719499 waagent[1892]: 2026-03-13T12:21:01.718988Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Mar 13 12:21:01.719499 waagent[1892]: 2026-03-13T12:21:01.719179Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 13 12:21:01.719499 waagent[1892]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 13 12:21:01.719499 waagent[1892]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 13 12:21:01.719499 waagent[1892]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 13 12:21:01.719499 waagent[1892]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 13 12:21:01.719499 waagent[1892]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 12:21:01.719499 waagent[1892]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 13 12:21:01.719909 waagent[1892]: 2026-03-13T12:21:01.719845Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 13 12:21:01.720007 waagent[1892]: 2026-03-13T12:21:01.719975Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 13 12:21:01.720180 waagent[1892]: 2026-03-13T12:21:01.720143Z INFO EnvHandler ExtHandler Configure routes Mar 13 12:21:01.720261 waagent[1892]: 2026-03-13T12:21:01.720215Z INFO EnvHandler ExtHandler Gateway:None Mar 13 12:21:01.720289 waagent[1892]: 2026-03-13T12:21:01.720268Z INFO EnvHandler ExtHandler Routes:None Mar 13 12:21:01.720571 waagent[1892]: 2026-03-13T12:21:01.720502Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 13 
12:21:01.720665 waagent[1892]: 2026-03-13T12:21:01.720595Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 13 12:21:01.721043 waagent[1892]: 2026-03-13T12:21:01.720987Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 13 12:21:01.721214 waagent[1892]: 2026-03-13T12:21:01.721129Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 13 12:21:01.721582 waagent[1892]: 2026-03-13T12:21:01.721521Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 13 12:21:01.727591 waagent[1892]: 2026-03-13T12:21:01.727522Z INFO ExtHandler ExtHandler Mar 13 12:21:01.728285 waagent[1892]: 2026-03-13T12:21:01.728057Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f83685c9-4962-4ecb-8f6a-675c77276929 correlation a85615ea-b74a-42f6-9849-73dc60b85c84 created: 2026-03-13T12:19:21.343040Z] Mar 13 12:21:01.729198 waagent[1892]: 2026-03-13T12:21:01.729136Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 13 12:21:01.731136 waagent[1892]: 2026-03-13T12:21:01.731076Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Mar 13 12:21:01.785040 waagent[1892]: 2026-03-13T12:21:01.784912Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 33981F14-3767-45B2-A365-AB753D90A448;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 13 12:21:01.855091 waagent[1892]: 2026-03-13T12:21:01.854642Z INFO MonitorHandler ExtHandler Network interfaces: Mar 13 12:21:01.855091 waagent[1892]: Executing ['ip', '-a', '-o', 'link']: Mar 13 12:21:01.855091 waagent[1892]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 13 12:21:01.855091 waagent[1892]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c3:89:b1 brd ff:ff:ff:ff:ff:ff Mar 13 12:21:01.855091 waagent[1892]: 3: enP32442s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:c3:89:b1 brd ff:ff:ff:ff:ff:ff\ altname enP32442p0s2 Mar 13 12:21:01.855091 waagent[1892]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 13 12:21:01.855091 waagent[1892]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 13 12:21:01.855091 waagent[1892]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 13 12:21:01.855091 waagent[1892]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 13 12:21:01.855091 waagent[1892]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 13 12:21:01.855091 waagent[1892]: 2: eth0 inet6 fe80::7eed:8dff:fec3:89b1/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 13 12:21:01.943703 waagent[1892]: 2026-03-13T12:21:01.943611Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 13 12:21:01.943703 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:21:01.943703 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 12:21:01.943703 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:21:01.943703 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 12:21:01.943703 waagent[1892]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:21:01.943703 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 12:21:01.943703 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 12:21:01.943703 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 12:21:01.943703 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 12:21:01.947149 waagent[1892]: 2026-03-13T12:21:01.947072Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 13 12:21:01.947149 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:21:01.947149 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 12:21:01.947149 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:21:01.947149 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 12:21:01.947149 waagent[1892]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 13 12:21:01.947149 waagent[1892]: pkts bytes target prot opt in out source destination Mar 13 12:21:01.947149 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 13 12:21:01.947149 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 13 12:21:01.947149 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 13 12:21:01.947433 waagent[1892]: 2026-03-13T12:21:01.947394Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 13 
12:21:04.803143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 12:21:04.810885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:04.915142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:04.919397 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:21:04.953837 kubelet[2123]: E0313 12:21:04.953784 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:21:04.957203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:21:04.957363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 12:21:07.655787 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 12:21:07.662755 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.16.10:54184.service - OpenSSH per-connection server daemon (10.200.16.10:54184). Mar 13 12:21:08.232318 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 54184 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:08.233239 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:08.237101 systemd-logind[1690]: New session 3 of user core. Mar 13 12:21:08.247701 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 12:21:08.647365 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.16.10:54198.service - OpenSSH per-connection server daemon (10.200.16.10:54198). 
Mar 13 12:21:09.138890 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 54198 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:09.140288 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:09.144447 systemd-logind[1690]: New session 4 of user core. Mar 13 12:21:09.150627 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 12:21:09.491625 sshd[2136]: pam_unix(sshd:session): session closed for user core Mar 13 12:21:09.495340 systemd[1]: sshd@1-10.200.20.10:22-10.200.16.10:54198.service: Deactivated successfully. Mar 13 12:21:09.497227 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 12:21:09.497909 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Mar 13 12:21:09.498935 systemd-logind[1690]: Removed session 4. Mar 13 12:21:09.578281 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.16.10:54212.service - OpenSSH per-connection server daemon (10.200.16.10:54212). Mar 13 12:21:10.065257 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 54212 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:10.066141 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:10.070016 systemd-logind[1690]: New session 5 of user core. Mar 13 12:21:10.077641 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 12:21:10.411835 sshd[2143]: pam_unix(sshd:session): session closed for user core Mar 13 12:21:10.415424 systemd-logind[1690]: Session 5 logged out. Waiting for processes to exit. Mar 13 12:21:10.415846 systemd[1]: sshd@2-10.200.20.10:22-10.200.16.10:54212.service: Deactivated successfully. Mar 13 12:21:10.418747 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 12:21:10.420936 systemd-logind[1690]: Removed session 5. 
Mar 13 12:21:10.501071 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.16.10:55290.service - OpenSSH per-connection server daemon (10.200.16.10:55290). Mar 13 12:21:10.991505 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 55290 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:10.992446 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:10.997194 systemd-logind[1690]: New session 6 of user core. Mar 13 12:21:11.003700 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 12:21:11.344242 sshd[2150]: pam_unix(sshd:session): session closed for user core Mar 13 12:21:11.347997 systemd[1]: sshd@3-10.200.20.10:22-10.200.16.10:55290.service: Deactivated successfully. Mar 13 12:21:11.349955 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 12:21:11.350682 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit. Mar 13 12:21:11.351545 systemd-logind[1690]: Removed session 6. Mar 13 12:21:11.435721 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.16.10:55294.service - OpenSSH per-connection server daemon (10.200.16.10:55294). Mar 13 12:21:11.924351 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 55294 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:11.925772 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:11.929575 systemd-logind[1690]: New session 7 of user core. Mar 13 12:21:11.940663 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 13 12:21:12.411801 sudo[2160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 12:21:12.412079 sudo[2160]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 12:21:12.425372 sudo[2160]: pam_unix(sudo:session): session closed for user root Mar 13 12:21:12.502770 sshd[2157]: pam_unix(sshd:session): session closed for user core Mar 13 12:21:12.507283 systemd[1]: sshd@4-10.200.20.10:22-10.200.16.10:55294.service: Deactivated successfully. Mar 13 12:21:12.508984 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 12:21:12.509697 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit. Mar 13 12:21:12.510668 systemd-logind[1690]: Removed session 7. Mar 13 12:21:12.590634 systemd[1]: Started sshd@5-10.200.20.10:22-10.200.16.10:55296.service - OpenSSH per-connection server daemon (10.200.16.10:55296). Mar 13 12:21:13.081078 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 55296 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:13.082520 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:13.086161 systemd-logind[1690]: New session 8 of user core. Mar 13 12:21:13.095635 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 13 12:21:13.357240 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 12:21:13.357760 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 12:21:13.360830 sudo[2169]: pam_unix(sudo:session): session closed for user root Mar 13 12:21:13.365757 sudo[2168]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 13 12:21:13.366026 sudo[2168]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 12:21:13.384730 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 13 12:21:13.386797 auditctl[2172]: No rules Mar 13 12:21:13.388012 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 12:21:13.388248 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 13 12:21:13.390216 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 13 12:21:13.415884 augenrules[2190]: No rules Mar 13 12:21:13.417524 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 13 12:21:13.419908 sudo[2168]: pam_unix(sudo:session): session closed for user root Mar 13 12:21:13.498129 sshd[2165]: pam_unix(sshd:session): session closed for user core Mar 13 12:21:13.501489 systemd[1]: sshd@5-10.200.20.10:22-10.200.16.10:55296.service: Deactivated successfully. Mar 13 12:21:13.503403 systemd[1]: session-8.scope: Deactivated successfully. Mar 13 12:21:13.505316 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit. Mar 13 12:21:13.506442 systemd-logind[1690]: Removed session 8. Mar 13 12:21:13.585279 systemd[1]: Started sshd@6-10.200.20.10:22-10.200.16.10:55310.service - OpenSSH per-connection server daemon (10.200.16.10:55310). 
Mar 13 12:21:14.074451 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 55310 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:21:14.075418 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:21:14.079693 systemd-logind[1690]: New session 9 of user core. Mar 13 12:21:14.094648 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 13 12:21:14.349005 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 12:21:14.349553 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 12:21:15.053077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 12:21:15.064067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:15.657837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:15.662673 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:21:15.696124 kubelet[2219]: E0313 12:21:15.696070 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:21:15.698087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:21:15.698325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 12:21:16.481906 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 12:21:16.482277 systemd[1]: Starting docker.service - Docker Application Container Engine... 
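The kubelet failure recorded above (exit status 1, `/var/lib/kubelet/config.yaml: no such file or directory`) is the usual state of a kubeadm-style node before `kubeadm init`/`kubeadm join` has written the kubelet configuration; systemd keeps restarting the unit until the file appears. When sifting through records like these, it helps to split each one into its fields. A minimal sketch of parsing a journald-style record into timestamp, unit, PID, and message (the sample line is copied from this log; the field layout is an assumption based on the default `journalctl` short format and does not cover `kernel:` records, which carry no PID):

```python
import re

# journald "short" format: MON DD HH:MM:SS.ffffff unit[pid]: message
# (an assumed layout matching the records in this log, not a general parser)
RECORD = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[\w.-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def parse_record(line: str) -> dict:
    """Split one journald-style record into timestamp, unit, pid, message."""
    m = RECORD.match(line)
    if m is None:
        raise ValueError(f"unrecognized record: {line!r}")
    return m.groupdict()

# Sample record copied from this log.
rec = parse_record(
    "Mar 13 12:21:15.698087 systemd[1]: kubelet.service: "
    "Main process exited, code=exited, status=1/FAILURE"
)
```

Records emitted through a different transport (e.g. the parenthesized `(kubelet)[2219]:` lines) would need a looser unit pattern; this sketch only targets the common `unit[pid]:` shape.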
Mar 13 12:21:16.915289 chronyd[1683]: Selected source PHC0 Mar 13 12:21:17.476200 dockerd[2232]: time="2026-03-13T12:21:17.476131554Z" level=info msg="Starting up" Mar 13 12:21:18.013806 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1991839419-merged.mount: Deactivated successfully. Mar 13 12:21:18.231597 dockerd[2232]: time="2026-03-13T12:21:18.231335789Z" level=info msg="Loading containers: start." Mar 13 12:21:18.422519 kernel: Initializing XFRM netlink socket Mar 13 12:21:18.662897 systemd-networkd[1359]: docker0: Link UP Mar 13 12:21:18.693155 dockerd[2232]: time="2026-03-13T12:21:18.693048337Z" level=info msg="Loading containers: done." Mar 13 12:21:18.707382 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2399399320-merged.mount: Deactivated successfully. Mar 13 12:21:18.747108 dockerd[2232]: time="2026-03-13T12:21:18.747059095Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 12:21:18.747249 dockerd[2232]: time="2026-03-13T12:21:18.747176815Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 13 12:21:18.747321 dockerd[2232]: time="2026-03-13T12:21:18.747297975Z" level=info msg="Daemon has completed initialization" Mar 13 12:21:18.816355 dockerd[2232]: time="2026-03-13T12:21:18.815174534Z" level=info msg="API listen on /run/docker.sock" Mar 13 12:21:18.816044 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 12:21:19.215550 containerd[1713]: time="2026-03-13T12:21:19.215510131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 13 12:21:20.041747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329223074.mount: Deactivated successfully. 
Mar 13 12:21:21.691989 containerd[1713]: time="2026-03-13T12:21:21.691934011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:21.696149 containerd[1713]: time="2026-03-13T12:21:21.696099251Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796" Mar 13 12:21:21.701499 containerd[1713]: time="2026-03-13T12:21:21.700525531Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:21.706165 containerd[1713]: time="2026-03-13T12:21:21.706131611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:21.707441 containerd[1713]: time="2026-03-13T12:21:21.706958851Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 2.49141032s" Mar 13 12:21:21.707962 containerd[1713]: time="2026-03-13T12:21:21.707944171Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\"" Mar 13 12:21:21.708549 containerd[1713]: time="2026-03-13T12:21:21.708519171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 13 12:21:23.344214 containerd[1713]: time="2026-03-13T12:21:23.344161891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:23.347515 containerd[1713]: time="2026-03-13T12:21:23.347462611Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039" Mar 13 12:21:23.352065 containerd[1713]: time="2026-03-13T12:21:23.352011131Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:23.358703 containerd[1713]: time="2026-03-13T12:21:23.358369251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:23.359892 containerd[1713]: time="2026-03-13T12:21:23.359548611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.65084124s" Mar 13 12:21:23.359892 containerd[1713]: time="2026-03-13T12:21:23.359586291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\"" Mar 13 12:21:23.360540 containerd[1713]: time="2026-03-13T12:21:23.360343611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 13 12:21:24.803975 containerd[1713]: time="2026-03-13T12:21:24.803916811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:24.807277 containerd[1713]: time="2026-03-13T12:21:24.807236851Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901" Mar 13 12:21:24.816169 containerd[1713]: time="2026-03-13T12:21:24.816134411Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:24.822088 containerd[1713]: time="2026-03-13T12:21:24.822052491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:24.824226 containerd[1713]: time="2026-03-13T12:21:24.824187451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.4638118s" Mar 13 12:21:24.824264 containerd[1713]: time="2026-03-13T12:21:24.824231891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\"" Mar 13 12:21:24.824688 containerd[1713]: time="2026-03-13T12:21:24.824656491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 13 12:21:25.803243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 13 12:21:25.808887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:25.911444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 12:21:25.917886 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 12:21:25.953331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 12:21:26.337618 kubelet[2442]: E0313 12:21:25.951640 2442 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 12:21:25.953458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 12:21:26.799106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076625271.mount: Deactivated successfully. Mar 13 12:21:27.043142 containerd[1713]: time="2026-03-13T12:21:27.043053595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:27.046305 containerd[1713]: time="2026-03-13T12:21:27.046268013Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583" Mar 13 12:21:27.050273 containerd[1713]: time="2026-03-13T12:21:27.050192715Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:27.055724 containerd[1713]: time="2026-03-13T12:21:27.055656745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:27.056676 containerd[1713]: time="2026-03-13T12:21:27.056049948Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id 
\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 2.231359737s" Mar 13 12:21:27.056676 containerd[1713]: time="2026-03-13T12:21:27.056084428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\"" Mar 13 12:21:27.056676 containerd[1713]: time="2026-03-13T12:21:27.056549750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 13 12:21:27.771361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241113784.mount: Deactivated successfully. Mar 13 12:21:29.207205 containerd[1713]: time="2026-03-13T12:21:29.207143741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:29.211040 containerd[1713]: time="2026-03-13T12:21:29.210739427Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211" Mar 13 12:21:29.214668 containerd[1713]: time="2026-03-13T12:21:29.214407674Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:29.219967 containerd[1713]: time="2026-03-13T12:21:29.219906124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:29.221363 containerd[1713]: time="2026-03-13T12:21:29.221197726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.164562255s" Mar 13 12:21:29.221363 containerd[1713]: time="2026-03-13T12:21:29.221238367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Mar 13 12:21:29.222564 containerd[1713]: time="2026-03-13T12:21:29.221755967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 13 12:21:30.373943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1805565736.mount: Deactivated successfully. Mar 13 12:21:30.398059 containerd[1713]: time="2026-03-13T12:21:30.398001027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:30.402675 containerd[1713]: time="2026-03-13T12:21:30.402397995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 13 12:21:30.406042 containerd[1713]: time="2026-03-13T12:21:30.405851561Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:30.411496 containerd[1713]: time="2026-03-13T12:21:30.411435211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:30.412378 containerd[1713]: time="2026-03-13T12:21:30.412135372Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 1.190349845s" Mar 13 12:21:30.412378 containerd[1713]: time="2026-03-13T12:21:30.412171813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 13 12:21:30.412835 containerd[1713]: time="2026-03-13T12:21:30.412660893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 13 12:21:31.132536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447996736.mount: Deactivated successfully. Mar 13 12:21:32.246494 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Mar 13 12:21:32.333399 containerd[1713]: time="2026-03-13T12:21:32.332882466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:32.335835 containerd[1713]: time="2026-03-13T12:21:32.335801071Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165" Mar 13 12:21:32.340284 containerd[1713]: time="2026-03-13T12:21:32.340255399Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:32.346212 containerd[1713]: time="2026-03-13T12:21:32.346175170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:32.348215 containerd[1713]: time="2026-03-13T12:21:32.347195532Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest 
\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.934506439s" Mar 13 12:21:32.348215 containerd[1713]: time="2026-03-13T12:21:32.347235812Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Mar 13 12:21:35.783830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:35.791700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:35.823469 systemd[1]: Reloading requested from client PID 2604 ('systemctl') (unit session-9.scope)... Mar 13 12:21:35.823606 systemd[1]: Reloading... Mar 13 12:21:35.913550 zram_generator::config[2647]: No configuration found. Mar 13 12:21:36.040074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:21:36.118085 systemd[1]: Reloading finished in 294 ms. Mar 13 12:21:36.197197 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 12:21:36.197458 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 12:21:36.197830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:36.203913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:36.851434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:36.860844 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 12:21:36.902110 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:21:37.262780 kubelet[2708]: I0313 12:21:37.262654 2708 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 13 12:21:37.262780 kubelet[2708]: I0313 12:21:37.262700 2708 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 12:21:37.262780 kubelet[2708]: I0313 12:21:37.262724 2708 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 12:21:37.262780 kubelet[2708]: I0313 12:21:37.262732 2708 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 12:21:37.263263 kubelet[2708]: I0313 12:21:37.263035 2708 server.go:951] "Client rotation is on, will bootstrap in background" Mar 13 12:21:37.403604 kubelet[2708]: E0313 12:21:37.403567 2708 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 12:21:37.403830 kubelet[2708]: I0313 12:21:37.403616 2708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 12:21:37.409054 kubelet[2708]: E0313 12:21:37.408141 2708 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 13 12:21:37.409054 kubelet[2708]: I0313 12:21:37.408216 2708 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 13 12:21:37.411680 kubelet[2708]: I0313 12:21:37.411655 2708 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 12:21:37.412570 kubelet[2708]: I0313 12:21:37.412516 2708 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:21:37.412716 kubelet[2708]: I0313 12:21:37.412558 2708 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-d13a81acd8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:21:37.412809 kubelet[2708]: I0313 12:21:37.412719 2708 topology_manager.go:143] "Creating topology manager with none policy" Mar 13 
12:21:37.412809 kubelet[2708]: I0313 12:21:37.412727 2708 container_manager_linux.go:308] "Creating device plugin manager" Mar 13 12:21:37.412858 kubelet[2708]: I0313 12:21:37.412832 2708 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 12:21:37.418714 kubelet[2708]: I0313 12:21:37.418680 2708 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 13 12:21:37.418900 kubelet[2708]: I0313 12:21:37.418887 2708 kubelet.go:482] "Attempting to sync node with API server" Mar 13 12:21:37.418936 kubelet[2708]: I0313 12:21:37.418904 2708 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:21:37.418936 kubelet[2708]: I0313 12:21:37.418920 2708 kubelet.go:394] "Adding apiserver pod source" Mar 13 12:21:37.418936 kubelet[2708]: I0313 12:21:37.418930 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:21:37.422861 kubelet[2708]: I0313 12:21:37.422259 2708 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 13 12:21:37.423307 kubelet[2708]: I0313 12:21:37.423287 2708 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 12:21:37.423353 kubelet[2708]: I0313 12:21:37.423321 2708 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 12:21:37.423382 kubelet[2708]: W0313 12:21:37.423361 2708 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 13 12:21:37.425602 kubelet[2708]: I0313 12:21:37.425546 2708 server.go:1257] "Started kubelet" Mar 13 12:21:37.427218 kubelet[2708]: I0313 12:21:37.427193 2708 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 13 12:21:37.431804 kubelet[2708]: E0313 12:21:37.430741 2708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-d13a81acd8.189c65fe9820d4dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-d13a81acd8,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-d13a81acd8,},FirstTimestamp:2026-03-13 12:21:37.425519836 +0000 UTC m=+0.561742776,LastTimestamp:2026-03-13 12:21:37.425519836 +0000 UTC m=+0.561742776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-d13a81acd8,}" Mar 13 12:21:37.432730 kubelet[2708]: I0313 12:21:37.432683 2708 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:21:37.433763 kubelet[2708]: I0313 12:21:37.433739 2708 server.go:317] "Adding debug handlers to kubelet server" Mar 13 12:21:37.435352 kubelet[2708]: I0313 12:21:37.434723 2708 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 13 12:21:37.435352 kubelet[2708]: E0313 12:21:37.434925 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:37.437448 kubelet[2708]: I0313 12:21:37.436794 2708 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:21:37.437448 kubelet[2708]: I0313 12:21:37.436866 2708 server_v1.go:49] "podresources" method="list" 
useActivePods=true Mar 13 12:21:37.437448 kubelet[2708]: I0313 12:21:37.437049 2708 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:21:37.437448 kubelet[2708]: I0313 12:21:37.437258 2708 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 12:21:37.437448 kubelet[2708]: I0313 12:21:37.437274 2708 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 12:21:37.437448 kubelet[2708]: I0313 12:21:37.437311 2708 reconciler.go:29] "Reconciler: start to sync state" Mar 13 12:21:37.439046 kubelet[2708]: E0313 12:21:37.439011 2708 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d13a81acd8?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="200ms" Mar 13 12:21:37.439372 kubelet[2708]: I0313 12:21:37.439350 2708 factory.go:223] Registration of the systemd container factory successfully Mar 13 12:21:37.439613 kubelet[2708]: I0313 12:21:37.439594 2708 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 12:21:37.441433 kubelet[2708]: I0313 12:21:37.441412 2708 factory.go:223] Registration of the containerd container factory successfully Mar 13 12:21:37.443625 kubelet[2708]: E0313 12:21:37.443593 2708 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 12:21:37.470823 kubelet[2708]: I0313 12:21:37.470789 2708 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 13 12:21:37.473124 kubelet[2708]: I0313 12:21:37.472466 2708 cpu_manager.go:225] "Starting" policy="none" Mar 13 12:21:37.473124 kubelet[2708]: I0313 12:21:37.472514 2708 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 12:21:37.473124 kubelet[2708]: I0313 12:21:37.472534 2708 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 13 12:21:37.473657 kubelet[2708]: I0313 12:21:37.473383 2708 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 13 12:21:37.473657 kubelet[2708]: I0313 12:21:37.473423 2708 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 13 12:21:37.473657 kubelet[2708]: I0313 12:21:37.473450 2708 kubelet.go:2501] "Starting kubelet main sync loop" Mar 13 12:21:37.473657 kubelet[2708]: E0313 12:21:37.473508 2708 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 12:21:37.480853 kubelet[2708]: I0313 12:21:37.480828 2708 policy_none.go:50] "Start" Mar 13 12:21:37.480999 kubelet[2708]: I0313 12:21:37.480987 2708 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 13 12:21:37.481059 kubelet[2708]: I0313 12:21:37.481049 2708 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 12:21:37.486183 kubelet[2708]: I0313 12:21:37.486159 2708 policy_none.go:44] "Start" Mar 13 12:21:37.490399 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 12:21:37.504549 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 12:21:37.507894 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 12:21:37.512501 kubelet[2708]: E0313 12:21:37.512429 2708 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 12:21:37.513588 kubelet[2708]: I0313 12:21:37.512784 2708 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 13 12:21:37.513588 kubelet[2708]: I0313 12:21:37.512801 2708 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:21:37.513588 kubelet[2708]: I0313 12:21:37.513077 2708 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 13 12:21:37.515792 kubelet[2708]: E0313 12:21:37.515764 2708 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 12:21:37.515868 kubelet[2708]: E0313 12:21:37.515807 2708 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:37.587785 systemd[1]: Created slice kubepods-burstable-pod20b204dbf6bc292e674f66c44eee38eb.slice - libcontainer container kubepods-burstable-pod20b204dbf6bc292e674f66c44eee38eb.slice. Mar 13 12:21:37.600403 kubelet[2708]: E0313 12:21:37.600232 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.605854 systemd[1]: Created slice kubepods-burstable-pod3e4e2b8ba8e564017883ae82c46aff0b.slice - libcontainer container kubepods-burstable-pod3e4e2b8ba8e564017883ae82c46aff0b.slice. 
Mar 13 12:21:37.609123 kubelet[2708]: E0313 12:21:37.608849 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.612800 systemd[1]: Created slice kubepods-burstable-pod6ef3e5f45a216b19102e11a65110b23f.slice - libcontainer container kubepods-burstable-pod6ef3e5f45a216b19102e11a65110b23f.slice. Mar 13 12:21:37.614750 kubelet[2708]: I0313 12:21:37.614601 2708 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.615173 kubelet[2708]: E0313 12:21:37.615147 2708 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.615279 kubelet[2708]: E0313 12:21:37.615262 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638633 kubelet[2708]: I0313 12:21:37.638600 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638633 kubelet[2708]: I0313 12:21:37.638638 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef3e5f45a216b19102e11a65110b23f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-d13a81acd8\" (UID: \"6ef3e5f45a216b19102e11a65110b23f\") " pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638747 
kubelet[2708]: I0313 12:21:37.638675 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20b204dbf6bc292e674f66c44eee38eb-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-d13a81acd8\" (UID: \"20b204dbf6bc292e674f66c44eee38eb\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638747 kubelet[2708]: I0313 12:21:37.638690 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20b204dbf6bc292e674f66c44eee38eb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-d13a81acd8\" (UID: \"20b204dbf6bc292e674f66c44eee38eb\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638747 kubelet[2708]: I0313 12:21:37.638706 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638747 kubelet[2708]: I0313 12:21:37.638740 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638840 kubelet[2708]: I0313 12:21:37.638755 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: 
\"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638840 kubelet[2708]: I0313 12:21:37.638772 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.638840 kubelet[2708]: I0313 12:21:37.638797 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20b204dbf6bc292e674f66c44eee38eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-d13a81acd8\" (UID: \"20b204dbf6bc292e674f66c44eee38eb\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.640286 kubelet[2708]: E0313 12:21:37.640260 2708 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d13a81acd8?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="400ms" Mar 13 12:21:37.817502 kubelet[2708]: I0313 12:21:37.817352 2708 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.817801 kubelet[2708]: E0313 12:21:37.817704 2708 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:37.908638 containerd[1713]: time="2026-03-13T12:21:37.908272109Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-d13a81acd8,Uid:20b204dbf6bc292e674f66c44eee38eb,Namespace:kube-system,Attempt:0,}" Mar 13 12:21:37.916361 containerd[1713]: time="2026-03-13T12:21:37.915825963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-d13a81acd8,Uid:3e4e2b8ba8e564017883ae82c46aff0b,Namespace:kube-system,Attempt:0,}" Mar 13 12:21:37.922543 containerd[1713]: time="2026-03-13T12:21:37.922505255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-d13a81acd8,Uid:6ef3e5f45a216b19102e11a65110b23f,Namespace:kube-system,Attempt:0,}" Mar 13 12:21:38.041315 kubelet[2708]: E0313 12:21:38.041275 2708 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d13a81acd8?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="800ms" Mar 13 12:21:38.049756 kubelet[2708]: E0313 12:21:38.049644 2708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.101-d13a81acd8.189c65fe9820d4dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.101-d13a81acd8,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.101-d13a81acd8,},FirstTimestamp:2026-03-13 12:21:37.425519836 +0000 UTC m=+0.561742776,LastTimestamp:2026-03-13 12:21:37.425519836 +0000 UTC m=+0.561742776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.101-d13a81acd8,}" Mar 13 12:21:38.219906 kubelet[2708]: I0313 12:21:38.219805 2708 
kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:38.220182 kubelet[2708]: E0313 12:21:38.220139 2708 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:38.648858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650178566.mount: Deactivated successfully. Mar 13 12:21:38.679578 containerd[1713]: time="2026-03-13T12:21:38.679525823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 12:21:38.685957 containerd[1713]: time="2026-03-13T12:21:38.685708434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 13 12:21:38.689174 containerd[1713]: time="2026-03-13T12:21:38.689134160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 12:21:38.693448 containerd[1713]: time="2026-03-13T12:21:38.692700407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 12:21:38.695926 containerd[1713]: time="2026-03-13T12:21:38.695824772Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 13 12:21:38.700838 containerd[1713]: time="2026-03-13T12:21:38.699805739Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 12:21:38.703506 
containerd[1713]: time="2026-03-13T12:21:38.702793785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 13 12:21:38.707948 containerd[1713]: time="2026-03-13T12:21:38.707886154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 12:21:38.709087 containerd[1713]: time="2026-03-13T12:21:38.708660955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 792.759912ms" Mar 13 12:21:38.710988 containerd[1713]: time="2026-03-13T12:21:38.710954960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 788.251625ms" Mar 13 12:21:38.711512 containerd[1713]: time="2026-03-13T12:21:38.711470880Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 803.120571ms" Mar 13 12:21:38.774575 update_engine[1691]: I20260313 12:21:38.774505 1691 update_attempter.cc:509] Updating boot flags... 
Mar 13 12:21:38.821029 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2760) Mar 13 12:21:38.842499 kubelet[2708]: E0313 12:21:38.842402 2708 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.101-d13a81acd8?timeout=10s\": dial tcp 10.200.20.10:6443: connect: connection refused" interval="1.6s" Mar 13 12:21:39.022830 kubelet[2708]: I0313 12:21:39.022396 2708 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:39.022830 kubelet[2708]: E0313 12:21:39.022714 2708 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.200.20.10:6443/api/v1/nodes\": dial tcp 10.200.20.10:6443: connect: connection refused" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:39.507472 kubelet[2708]: E0313 12:21:39.507427 2708 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.10:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 12:21:39.913929 containerd[1713]: time="2026-03-13T12:21:39.913657293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:21:39.913929 containerd[1713]: time="2026-03-13T12:21:39.913741853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:21:39.913929 containerd[1713]: time="2026-03-13T12:21:39.913767693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:39.915020 containerd[1713]: time="2026-03-13T12:21:39.914908415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:39.915993 containerd[1713]: time="2026-03-13T12:21:39.915907137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:21:39.915993 containerd[1713]: time="2026-03-13T12:21:39.915964017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:21:39.916155 containerd[1713]: time="2026-03-13T12:21:39.915996737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:39.916155 containerd[1713]: time="2026-03-13T12:21:39.916084738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:39.916588 containerd[1713]: time="2026-03-13T12:21:39.916368898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:21:39.916588 containerd[1713]: time="2026-03-13T12:21:39.916413538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:21:39.916588 containerd[1713]: time="2026-03-13T12:21:39.916427938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:39.916588 containerd[1713]: time="2026-03-13T12:21:39.916520698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:39.940369 systemd[1]: run-containerd-runc-k8s.io-1ec2787d1d491a36df8e6dccc78fb14893db6ac720f5019f3bc7b6958abe578c-runc.DwAB73.mount: Deactivated successfully. Mar 13 12:21:39.953689 systemd[1]: Started cri-containerd-1ec2787d1d491a36df8e6dccc78fb14893db6ac720f5019f3bc7b6958abe578c.scope - libcontainer container 1ec2787d1d491a36df8e6dccc78fb14893db6ac720f5019f3bc7b6958abe578c. Mar 13 12:21:39.959034 systemd[1]: Started cri-containerd-9d06fa00bc5f033a0a080641d03ae81c982027de783840ac70b9727583bae7f9.scope - libcontainer container 9d06fa00bc5f033a0a080641d03ae81c982027de783840ac70b9727583bae7f9. Mar 13 12:21:39.960566 systemd[1]: Started cri-containerd-e9d730b343400c7d9b58ced43c7f5dd7ff4f36a41306c401592b0045cbed21b5.scope - libcontainer container e9d730b343400c7d9b58ced43c7f5dd7ff4f36a41306c401592b0045cbed21b5. Mar 13 12:21:40.008362 containerd[1713]: time="2026-03-13T12:21:40.008207464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.101-d13a81acd8,Uid:20b204dbf6bc292e674f66c44eee38eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9d730b343400c7d9b58ced43c7f5dd7ff4f36a41306c401592b0045cbed21b5\"" Mar 13 12:21:40.010284 containerd[1713]: time="2026-03-13T12:21:40.010205788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.101-d13a81acd8,Uid:3e4e2b8ba8e564017883ae82c46aff0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ec2787d1d491a36df8e6dccc78fb14893db6ac720f5019f3bc7b6958abe578c\"" Mar 13 12:21:40.012403 containerd[1713]: time="2026-03-13T12:21:40.012298791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.101-d13a81acd8,Uid:6ef3e5f45a216b19102e11a65110b23f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d06fa00bc5f033a0a080641d03ae81c982027de783840ac70b9727583bae7f9\"" Mar 13 12:21:40.022415 containerd[1713]: time="2026-03-13T12:21:40.022375370Z" 
level=info msg="CreateContainer within sandbox \"e9d730b343400c7d9b58ced43c7f5dd7ff4f36a41306c401592b0045cbed21b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 12:21:40.028833 containerd[1713]: time="2026-03-13T12:21:40.028786461Z" level=info msg="CreateContainer within sandbox \"1ec2787d1d491a36df8e6dccc78fb14893db6ac720f5019f3bc7b6958abe578c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 12:21:40.034613 containerd[1713]: time="2026-03-13T12:21:40.034579432Z" level=info msg="CreateContainer within sandbox \"9d06fa00bc5f033a0a080641d03ae81c982027de783840ac70b9727583bae7f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 12:21:40.109120 containerd[1713]: time="2026-03-13T12:21:40.109072606Z" level=info msg="CreateContainer within sandbox \"e9d730b343400c7d9b58ced43c7f5dd7ff4f36a41306c401592b0045cbed21b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"518f5577397681ba3e6fab0bb661f4be7614793b5f45ec07ab32628bca21b43c\"" Mar 13 12:21:40.110210 containerd[1713]: time="2026-03-13T12:21:40.110001088Z" level=info msg="StartContainer for \"518f5577397681ba3e6fab0bb661f4be7614793b5f45ec07ab32628bca21b43c\"" Mar 13 12:21:40.113218 containerd[1713]: time="2026-03-13T12:21:40.113108534Z" level=info msg="CreateContainer within sandbox \"1ec2787d1d491a36df8e6dccc78fb14893db6ac720f5019f3bc7b6958abe578c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f78b1690afa0544e4daf2fbfd2bd869bc32ce8904c8fb2d1fc01ac02360f7094\"" Mar 13 12:21:40.114772 containerd[1713]: time="2026-03-13T12:21:40.114748417Z" level=info msg="StartContainer for \"f78b1690afa0544e4daf2fbfd2bd869bc32ce8904c8fb2d1fc01ac02360f7094\"" Mar 13 12:21:40.123906 containerd[1713]: time="2026-03-13T12:21:40.123791433Z" level=info msg="CreateContainer within sandbox \"9d06fa00bc5f033a0a080641d03ae81c982027de783840ac70b9727583bae7f9\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"684f7a8c3dec54537a3444ab74469ad51151850867b3a65dc8e95cbb8f0faf48\"" Mar 13 12:21:40.125445 containerd[1713]: time="2026-03-13T12:21:40.124665194Z" level=info msg="StartContainer for \"684f7a8c3dec54537a3444ab74469ad51151850867b3a65dc8e95cbb8f0faf48\"" Mar 13 12:21:40.143252 systemd[1]: Started cri-containerd-518f5577397681ba3e6fab0bb661f4be7614793b5f45ec07ab32628bca21b43c.scope - libcontainer container 518f5577397681ba3e6fab0bb661f4be7614793b5f45ec07ab32628bca21b43c. Mar 13 12:21:40.153907 systemd[1]: Started cri-containerd-f78b1690afa0544e4daf2fbfd2bd869bc32ce8904c8fb2d1fc01ac02360f7094.scope - libcontainer container f78b1690afa0544e4daf2fbfd2bd869bc32ce8904c8fb2d1fc01ac02360f7094. Mar 13 12:21:40.166415 systemd[1]: Started cri-containerd-684f7a8c3dec54537a3444ab74469ad51151850867b3a65dc8e95cbb8f0faf48.scope - libcontainer container 684f7a8c3dec54537a3444ab74469ad51151850867b3a65dc8e95cbb8f0faf48. Mar 13 12:21:40.226628 containerd[1713]: time="2026-03-13T12:21:40.226579499Z" level=info msg="StartContainer for \"518f5577397681ba3e6fab0bb661f4be7614793b5f45ec07ab32628bca21b43c\" returns successfully" Mar 13 12:21:40.226761 containerd[1713]: time="2026-03-13T12:21:40.226742099Z" level=info msg="StartContainer for \"f78b1690afa0544e4daf2fbfd2bd869bc32ce8904c8fb2d1fc01ac02360f7094\" returns successfully" Mar 13 12:21:40.226791 containerd[1713]: time="2026-03-13T12:21:40.226771099Z" level=info msg="StartContainer for \"684f7a8c3dec54537a3444ab74469ad51151850867b3a65dc8e95cbb8f0faf48\" returns successfully" Mar 13 12:21:40.487454 kubelet[2708]: E0313 12:21:40.486586 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:40.490351 kubelet[2708]: E0313 12:21:40.490329 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:40.492791 kubelet[2708]: E0313 12:21:40.492770 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:40.625976 kubelet[2708]: I0313 12:21:40.625948 2708 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:41.492453 kubelet[2708]: E0313 12:21:41.492422 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:41.492991 kubelet[2708]: E0313 12:21:41.492970 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:41.864123 kubelet[2708]: I0313 12:21:41.864083 2708 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:41.864123 kubelet[2708]: E0313 12:21:41.864121 2708 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081.3.101-d13a81acd8\": node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:41.958928 kubelet[2708]: E0313 12:21:41.958818 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.060000 kubelet[2708]: E0313 12:21:42.059948 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.160908 kubelet[2708]: E0313 12:21:42.160457 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.261185 kubelet[2708]: E0313 12:21:42.261144 2708 
kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.361805 kubelet[2708]: E0313 12:21:42.361755 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.462011 kubelet[2708]: E0313 12:21:42.461891 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.494673 kubelet[2708]: E0313 12:21:42.494440 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:42.562208 kubelet[2708]: E0313 12:21:42.562162 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.662887 kubelet[2708]: E0313 12:21:42.662833 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.734506 kubelet[2708]: E0313 12:21:42.734402 2708 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.101-d13a81acd8\" not found" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:42.763768 kubelet[2708]: E0313 12:21:42.763725 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.864392 kubelet[2708]: E0313 12:21:42.864345 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:42.965094 kubelet[2708]: E0313 12:21:42.965047 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:43.066206 kubelet[2708]: E0313 12:21:43.066166 2708 
kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:43.166938 kubelet[2708]: E0313 12:21:43.166901 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:43.267990 kubelet[2708]: E0313 12:21:43.267691 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:43.368442 kubelet[2708]: E0313 12:21:43.368330 2708 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:43.535909 kubelet[2708]: I0313 12:21:43.535862 2708 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:43.551932 kubelet[2708]: I0313 12:21:43.551417 2708 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:43.553024 kubelet[2708]: I0313 12:21:43.552602 2708 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:43.567570 kubelet[2708]: I0313 12:21:43.567530 2708 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:43.567725 kubelet[2708]: I0313 12:21:43.567668 2708 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:43.582934 kubelet[2708]: I0313 12:21:43.582828 2708 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:44.344473 systemd[1]: Reloading requested 
from client PID 3036 ('systemctl') (unit session-9.scope)... Mar 13 12:21:44.344500 systemd[1]: Reloading... Mar 13 12:21:44.426754 kubelet[2708]: I0313 12:21:44.425585 2708 apiserver.go:52] "Watching apiserver" Mar 13 12:21:44.438128 kubelet[2708]: I0313 12:21:44.438026 2708 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 12:21:44.438505 zram_generator::config[3076]: No configuration found. Mar 13 12:21:44.550818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 13 12:21:44.645475 systemd[1]: Reloading finished in 300 ms. Mar 13 12:21:44.682716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:44.702563 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 12:21:44.702802 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:44.708827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 12:21:44.823681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 12:21:44.834848 (kubelet)[3140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 12:21:44.874512 kubelet[3140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 12:21:44.880346 kubelet[3140]: I0313 12:21:44.880291 3140 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 13 12:21:44.880602 kubelet[3140]: I0313 12:21:44.880589 3140 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 12:21:44.880678 kubelet[3140]: I0313 12:21:44.880669 3140 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 13 12:21:44.880734 kubelet[3140]: I0313 12:21:44.880724 3140 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 12:21:44.881056 kubelet[3140]: I0313 12:21:44.881040 3140 server.go:951] "Client rotation is on, will bootstrap in background" Mar 13 12:21:44.882348 kubelet[3140]: I0313 12:21:44.882322 3140 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 12:21:44.884640 kubelet[3140]: I0313 12:21:44.884607 3140 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 12:21:44.889830 kubelet[3140]: E0313 12:21:44.889787 3140 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 13 12:21:44.889933 kubelet[3140]: I0313 12:21:44.889857 3140 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 13 12:21:44.892865 kubelet[3140]: I0313 12:21:44.892841 3140 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 13 12:21:44.893051 kubelet[3140]: I0313 12:21:44.893026 3140 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:21:44.893192 kubelet[3140]: I0313 12:21:44.893048 3140 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.101-d13a81acd8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:21:44.893284 kubelet[3140]: I0313 12:21:44.893195 3140 topology_manager.go:143] "Creating topology manager with none policy" Mar 13 
12:21:44.893284 kubelet[3140]: I0313 12:21:44.893202 3140 container_manager_linux.go:308] "Creating device plugin manager" Mar 13 12:21:44.893284 kubelet[3140]: I0313 12:21:44.893222 3140 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 13 12:21:44.893414 kubelet[3140]: I0313 12:21:44.893396 3140 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 13 12:21:44.893591 kubelet[3140]: I0313 12:21:44.893578 3140 kubelet.go:482] "Attempting to sync node with API server" Mar 13 12:21:44.893628 kubelet[3140]: I0313 12:21:44.893597 3140 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:21:44.893628 kubelet[3140]: I0313 12:21:44.893616 3140 kubelet.go:394] "Adding apiserver pod source" Mar 13 12:21:44.893628 kubelet[3140]: I0313 12:21:44.893628 3140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:21:44.895878 kubelet[3140]: I0313 12:21:44.895558 3140 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 13 12:21:44.897569 kubelet[3140]: I0313 12:21:44.897244 3140 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 12:21:44.897569 kubelet[3140]: I0313 12:21:44.897282 3140 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 13 12:21:44.902158 kubelet[3140]: I0313 12:21:44.902132 3140 server.go:1257] "Started kubelet" Mar 13 12:21:44.906401 kubelet[3140]: I0313 12:21:44.906368 3140 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 13 12:21:44.911085 kubelet[3140]: I0313 12:21:44.911035 3140 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:21:44.913504 kubelet[3140]: I0313 12:21:44.912236 3140 server.go:317] "Adding debug handlers 
to kubelet server" Mar 13 12:21:44.917844 kubelet[3140]: I0313 12:21:44.916387 3140 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:21:44.917844 kubelet[3140]: I0313 12:21:44.916453 3140 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 13 12:21:44.917844 kubelet[3140]: I0313 12:21:44.916605 3140 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:21:44.917844 kubelet[3140]: I0313 12:21:44.916855 3140 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 12:21:44.918289 kubelet[3140]: I0313 12:21:44.918271 3140 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 13 12:21:44.918613 kubelet[3140]: E0313 12:21:44.918473 3140 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081.3.101-d13a81acd8\" not found" Mar 13 12:21:44.918921 kubelet[3140]: I0313 12:21:44.918904 3140 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 13 12:21:44.919031 kubelet[3140]: I0313 12:21:44.919015 3140 reconciler.go:29] "Reconciler: start to sync state" Mar 13 12:21:44.940353 kubelet[3140]: I0313 12:21:44.940016 3140 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 13 12:21:44.958483 kubelet[3140]: I0313 12:21:44.957353 3140 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 13 12:21:44.958483 kubelet[3140]: I0313 12:21:44.957382 3140 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 13 12:21:44.958483 kubelet[3140]: I0313 12:21:44.957400 3140 kubelet.go:2501] "Starting kubelet main sync loop" Mar 13 12:21:44.958483 kubelet[3140]: E0313 12:21:44.957458 3140 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 12:21:44.961819 kubelet[3140]: I0313 12:21:44.961475 3140 factory.go:223] Registration of the containerd container factory successfully Mar 13 12:21:44.961819 kubelet[3140]: I0313 12:21:44.961509 3140 factory.go:223] Registration of the systemd container factory successfully Mar 13 12:21:44.961819 kubelet[3140]: I0313 12:21:44.961586 3140 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 12:21:45.017732 kubelet[3140]: I0313 12:21:45.017709 3140 cpu_manager.go:225] "Starting" policy="none" Mar 13 12:21:45.017907 kubelet[3140]: I0313 12:21:45.017893 3140 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 12:21:45.017974 kubelet[3140]: I0313 12:21:45.017965 3140 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 13 12:21:45.018232 kubelet[3140]: I0313 12:21:45.018215 3140 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 13 12:21:45.018318 kubelet[3140]: I0313 12:21:45.018293 3140 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 13 12:21:45.018361 kubelet[3140]: I0313 12:21:45.018355 3140 policy_none.go:50] "Start" Mar 13 12:21:45.018409 kubelet[3140]: I0313 12:21:45.018402 3140 memory_manager.go:187] "Starting memorymanager" 
policy="None" Mar 13 12:21:45.018461 kubelet[3140]: I0313 12:21:45.018453 3140 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 13 12:21:45.019058 kubelet[3140]: I0313 12:21:45.019036 3140 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 13 12:21:45.019158 kubelet[3140]: I0313 12:21:45.019149 3140 policy_none.go:44] "Start" Mar 13 12:21:45.024711 kubelet[3140]: E0313 12:21:45.024680 3140 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 12:21:45.026147 kubelet[3140]: I0313 12:21:45.024981 3140 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 13 12:21:45.026147 kubelet[3140]: I0313 12:21:45.024998 3140 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:21:45.026917 kubelet[3140]: I0313 12:21:45.026892 3140 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 13 12:21:45.032070 kubelet[3140]: E0313 12:21:45.031978 3140 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 12:21:45.058734 kubelet[3140]: I0313 12:21:45.058692 3140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.059765 kubelet[3140]: I0313 12:21:45.059741 3140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.060146 kubelet[3140]: I0313 12:21:45.060104 3140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.070245 kubelet[3140]: I0313 12:21:45.069504 3140 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:45.070245 kubelet[3140]: E0313 12:21:45.069571 3140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-d13a81acd8\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.071412 kubelet[3140]: I0313 12:21:45.070938 3140 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:45.071412 kubelet[3140]: E0313 12:21:45.070983 3140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.071412 kubelet[3140]: I0313 12:21:45.071034 3140 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:45.071412 kubelet[3140]: E0313 12:21:45.071051 3140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.101-d13a81acd8\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119539 kubelet[3140]: I0313 12:21:45.119466 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119539 kubelet[3140]: I0313 12:21:45.119524 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119539 kubelet[3140]: I0313 12:21:45.119542 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119539 kubelet[3140]: I0313 12:21:45.119558 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119760 kubelet[3140]: I0313 12:21:45.119575 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/20b204dbf6bc292e674f66c44eee38eb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.101-d13a81acd8\" (UID: \"20b204dbf6bc292e674f66c44eee38eb\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119760 kubelet[3140]: I0313 12:21:45.119591 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3e4e2b8ba8e564017883ae82c46aff0b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.101-d13a81acd8\" (UID: \"3e4e2b8ba8e564017883ae82c46aff0b\") " pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119760 kubelet[3140]: I0313 12:21:45.119619 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ef3e5f45a216b19102e11a65110b23f-kubeconfig\") pod \"kube-scheduler-ci-4081.3.101-d13a81acd8\" (UID: \"6ef3e5f45a216b19102e11a65110b23f\") " pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119760 kubelet[3140]: I0313 12:21:45.119632 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20b204dbf6bc292e674f66c44eee38eb-ca-certs\") pod \"kube-apiserver-ci-4081.3.101-d13a81acd8\" (UID: \"20b204dbf6bc292e674f66c44eee38eb\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.119760 kubelet[3140]: I0313 12:21:45.119647 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20b204dbf6bc292e674f66c44eee38eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.101-d13a81acd8\" (UID: \"20b204dbf6bc292e674f66c44eee38eb\") " pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.135512 kubelet[3140]: I0313 12:21:45.135304 3140 
kubelet_node_status.go:74] "Attempting to register node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.151513 kubelet[3140]: I0313 12:21:45.150885 3140 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.151513 kubelet[3140]: I0313 12:21:45.150970 3140 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081.3.101-d13a81acd8" Mar 13 12:21:45.894362 kubelet[3140]: I0313 12:21:45.894324 3140 apiserver.go:52] "Watching apiserver" Mar 13 12:21:45.919382 kubelet[3140]: I0313 12:21:45.919346 3140 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 13 12:21:46.004095 kubelet[3140]: I0313 12:21:46.003876 3140 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:46.017946 kubelet[3140]: I0313 12:21:46.017912 3140 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 12:21:46.018085 kubelet[3140]: E0313 12:21:46.017973 3140 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.101-d13a81acd8\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" Mar 13 12:21:46.054522 kubelet[3140]: I0313 12:21:46.053979 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.101-d13a81acd8" podStartSLOduration=3.053864533 podStartE2EDuration="3.053864533s" podCreationTimestamp="2026-03-13 12:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:21:46.041505751 +0000 UTC m=+1.202286071" watchObservedRunningTime="2026-03-13 12:21:46.053864533 +0000 UTC m=+1.214644853" Mar 13 12:21:46.070410 kubelet[3140]: I0313 12:21:46.070106 3140 pod_startup_latency_tracker.go:108] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.101-d13a81acd8" podStartSLOduration=3.070090602 podStartE2EDuration="3.070090602s" podCreationTimestamp="2026-03-13 12:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:21:46.055118295 +0000 UTC m=+1.215898615" watchObservedRunningTime="2026-03-13 12:21:46.070090602 +0000 UTC m=+1.230870922" Mar 13 12:21:46.175130 kubelet[3140]: I0313 12:21:46.174841 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.101-d13a81acd8" podStartSLOduration=3.174827269 podStartE2EDuration="3.174827269s" podCreationTimestamp="2026-03-13 12:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:21:46.071663525 +0000 UTC m=+1.232443845" watchObservedRunningTime="2026-03-13 12:21:46.174827269 +0000 UTC m=+1.335607549" Mar 13 12:21:50.514446 kubelet[3140]: I0313 12:21:50.514383 3140 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 12:21:50.514884 containerd[1713]: time="2026-03-13T12:21:50.514711670Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 12:21:50.515059 kubelet[3140]: I0313 12:21:50.514913 3140 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 12:21:51.693467 systemd[1]: Created slice kubepods-besteffort-podbed0504e_f551_4da8_93db_31108259ffb9.slice - libcontainer container kubepods-besteffort-podbed0504e_f551_4da8_93db_31108259ffb9.slice. 
Mar 13 12:21:51.761367 kubelet[3140]: I0313 12:21:51.761214 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bed0504e-f551-4da8-93db-31108259ffb9-kube-proxy\") pod \"kube-proxy-t42xn\" (UID: \"bed0504e-f551-4da8-93db-31108259ffb9\") " pod="kube-system/kube-proxy-t42xn" Mar 13 12:21:51.761367 kubelet[3140]: I0313 12:21:51.761255 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bed0504e-f551-4da8-93db-31108259ffb9-xtables-lock\") pod \"kube-proxy-t42xn\" (UID: \"bed0504e-f551-4da8-93db-31108259ffb9\") " pod="kube-system/kube-proxy-t42xn" Mar 13 12:21:51.761367 kubelet[3140]: I0313 12:21:51.761270 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bed0504e-f551-4da8-93db-31108259ffb9-lib-modules\") pod \"kube-proxy-t42xn\" (UID: \"bed0504e-f551-4da8-93db-31108259ffb9\") " pod="kube-system/kube-proxy-t42xn" Mar 13 12:21:51.761367 kubelet[3140]: I0313 12:21:51.761290 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4kvv\" (UniqueName: \"kubernetes.io/projected/bed0504e-f551-4da8-93db-31108259ffb9-kube-api-access-t4kvv\") pod \"kube-proxy-t42xn\" (UID: \"bed0504e-f551-4da8-93db-31108259ffb9\") " pod="kube-system/kube-proxy-t42xn" Mar 13 12:21:51.836188 systemd[1]: Created slice kubepods-besteffort-pod57760383_6b63_4ee4_9a37_65cf377d5d2b.slice - libcontainer container kubepods-besteffort-pod57760383_6b63_4ee4_9a37_65cf377d5d2b.slice. 
Mar 13 12:21:51.862298 kubelet[3140]: I0313 12:21:51.862233 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57760383-6b63-4ee4-9a37-65cf377d5d2b-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-vkdbx\" (UID: \"57760383-6b63-4ee4-9a37-65cf377d5d2b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-vkdbx" Mar 13 12:21:51.862298 kubelet[3140]: I0313 12:21:51.862300 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbdv\" (UniqueName: \"kubernetes.io/projected/57760383-6b63-4ee4-9a37-65cf377d5d2b-kube-api-access-nxbdv\") pod \"tigera-operator-6cf4cccc57-vkdbx\" (UID: \"57760383-6b63-4ee4-9a37-65cf377d5d2b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-vkdbx" Mar 13 12:21:52.014759 containerd[1713]: time="2026-03-13T12:21:52.014285566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t42xn,Uid:bed0504e-f551-4da8-93db-31108259ffb9,Namespace:kube-system,Attempt:0,}" Mar 13 12:21:52.059923 containerd[1713]: time="2026-03-13T12:21:52.059787667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:21:52.059923 containerd[1713]: time="2026-03-13T12:21:52.059852348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:21:52.060143 containerd[1713]: time="2026-03-13T12:21:52.059895908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:52.060763 containerd[1713]: time="2026-03-13T12:21:52.060625069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:52.084715 systemd[1]: Started cri-containerd-b91cd39d5f799d128981940f6ed192e5ab0e3e9f866ea1bc6b79019d0ee2fc60.scope - libcontainer container b91cd39d5f799d128981940f6ed192e5ab0e3e9f866ea1bc6b79019d0ee2fc60. Mar 13 12:21:52.105270 containerd[1713]: time="2026-03-13T12:21:52.105125248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t42xn,Uid:bed0504e-f551-4da8-93db-31108259ffb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b91cd39d5f799d128981940f6ed192e5ab0e3e9f866ea1bc6b79019d0ee2fc60\"" Mar 13 12:21:52.114963 containerd[1713]: time="2026-03-13T12:21:52.114918150Z" level=info msg="CreateContainer within sandbox \"b91cd39d5f799d128981940f6ed192e5ab0e3e9f866ea1bc6b79019d0ee2fc60\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 12:21:52.147903 containerd[1713]: time="2026-03-13T12:21:52.147860143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-vkdbx,Uid:57760383-6b63-4ee4-9a37-65cf377d5d2b,Namespace:tigera-operator,Attempt:0,}" Mar 13 12:21:52.155427 containerd[1713]: time="2026-03-13T12:21:52.155091399Z" level=info msg="CreateContainer within sandbox \"b91cd39d5f799d128981940f6ed192e5ab0e3e9f866ea1bc6b79019d0ee2fc60\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79839c32d850803cceff3d1c0d18cf02ca5b23ed80f91eb8e59fd1761f802508\"" Mar 13 12:21:52.156708 containerd[1713]: time="2026-03-13T12:21:52.156158282Z" level=info msg="StartContainer for \"79839c32d850803cceff3d1c0d18cf02ca5b23ed80f91eb8e59fd1761f802508\"" Mar 13 12:21:52.182676 systemd[1]: Started cri-containerd-79839c32d850803cceff3d1c0d18cf02ca5b23ed80f91eb8e59fd1761f802508.scope - libcontainer container 79839c32d850803cceff3d1c0d18cf02ca5b23ed80f91eb8e59fd1761f802508. Mar 13 12:21:52.225577 containerd[1713]: time="2026-03-13T12:21:52.225204915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:21:52.225577 containerd[1713]: time="2026-03-13T12:21:52.225277036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:21:52.225577 containerd[1713]: time="2026-03-13T12:21:52.225300436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:52.225577 containerd[1713]: time="2026-03-13T12:21:52.225439916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:21:52.232885 containerd[1713]: time="2026-03-13T12:21:52.232835972Z" level=info msg="StartContainer for \"79839c32d850803cceff3d1c0d18cf02ca5b23ed80f91eb8e59fd1761f802508\" returns successfully" Mar 13 12:21:52.244382 systemd[1]: Started cri-containerd-b3e1027d6e84c3cf14c119fcb7a67f14561dcd0ed213daae7386a8a5f2d9dc90.scope - libcontainer container b3e1027d6e84c3cf14c119fcb7a67f14561dcd0ed213daae7386a8a5f2d9dc90. Mar 13 12:21:52.279923 containerd[1713]: time="2026-03-13T12:21:52.279541516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-vkdbx,Uid:57760383-6b63-4ee4-9a37-65cf377d5d2b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3e1027d6e84c3cf14c119fcb7a67f14561dcd0ed213daae7386a8a5f2d9dc90\"" Mar 13 12:21:52.284863 containerd[1713]: time="2026-03-13T12:21:52.284822448Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 12:21:53.776840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067680218.mount: Deactivated successfully. 
Mar 13 12:21:54.905543 containerd[1713]: time="2026-03-13T12:21:54.904697917Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:54.907902 containerd[1713]: time="2026-03-13T12:21:54.907866324Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 13 12:21:54.912053 containerd[1713]: time="2026-03-13T12:21:54.912020334Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:54.917434 containerd[1713]: time="2026-03-13T12:21:54.917191025Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:21:54.918069 containerd[1713]: time="2026-03-13T12:21:54.918033467Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.633122819s" Mar 13 12:21:54.918069 containerd[1713]: time="2026-03-13T12:21:54.918067787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 13 12:21:54.928739 containerd[1713]: time="2026-03-13T12:21:54.928701891Z" level=info msg="CreateContainer within sandbox \"b3e1027d6e84c3cf14c119fcb7a67f14561dcd0ed213daae7386a8a5f2d9dc90\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 12:21:54.966411 containerd[1713]: time="2026-03-13T12:21:54.966344854Z" level=info msg="CreateContainer within sandbox 
\"b3e1027d6e84c3cf14c119fcb7a67f14561dcd0ed213daae7386a8a5f2d9dc90\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0f8d247a69c660e2bcbf16c36a225e86999fe2b76eefa9413fcc165ad6186fc5\"" Mar 13 12:21:54.968596 containerd[1713]: time="2026-03-13T12:21:54.967384977Z" level=info msg="StartContainer for \"0f8d247a69c660e2bcbf16c36a225e86999fe2b76eefa9413fcc165ad6186fc5\"" Mar 13 12:21:54.995257 systemd[1]: run-containerd-runc-k8s.io-0f8d247a69c660e2bcbf16c36a225e86999fe2b76eefa9413fcc165ad6186fc5-runc.mg4MLu.mount: Deactivated successfully. Mar 13 12:21:55.008754 systemd[1]: Started cri-containerd-0f8d247a69c660e2bcbf16c36a225e86999fe2b76eefa9413fcc165ad6186fc5.scope - libcontainer container 0f8d247a69c660e2bcbf16c36a225e86999fe2b76eefa9413fcc165ad6186fc5. Mar 13 12:21:55.046599 containerd[1713]: time="2026-03-13T12:21:55.046383513Z" level=info msg="StartContainer for \"0f8d247a69c660e2bcbf16c36a225e86999fe2b76eefa9413fcc165ad6186fc5\" returns successfully" Mar 13 12:21:55.159388 kubelet[3140]: I0313 12:21:55.158896 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-t42xn" podStartSLOduration=4.158882923 podStartE2EDuration="4.158882923s" podCreationTimestamp="2026-03-13 12:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:21:53.032526552 +0000 UTC m=+8.193306872" watchObservedRunningTime="2026-03-13 12:21:55.158882923 +0000 UTC m=+10.319663243" Mar 13 12:21:56.040495 kubelet[3140]: I0313 12:21:56.040153 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-vkdbx" podStartSLOduration=2.403963059 podStartE2EDuration="5.040138044s" podCreationTimestamp="2026-03-13 12:21:51 +0000 UTC" firstStartedPulling="2026-03-13 12:21:52.282848804 +0000 UTC m=+7.443629124" lastFinishedPulling="2026-03-13 12:21:54.919023789 +0000 UTC 
m=+10.079804109" observedRunningTime="2026-03-13 12:21:56.040057803 +0000 UTC m=+11.200838083" watchObservedRunningTime="2026-03-13 12:21:56.040138044 +0000 UTC m=+11.200918364" Mar 13 12:22:01.137977 sudo[2201]: pam_unix(sudo:session): session closed for user root Mar 13 12:22:01.219274 sshd[2198]: pam_unix(sshd:session): session closed for user core Mar 13 12:22:01.225988 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit. Mar 13 12:22:01.228023 systemd[1]: sshd@6-10.200.20.10:22-10.200.16.10:55310.service: Deactivated successfully. Mar 13 12:22:01.232188 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 12:22:01.232442 systemd[1]: session-9.scope: Consumed 5.170s CPU time, 149.6M memory peak, 0B memory swap peak. Mar 13 12:22:01.233266 systemd-logind[1690]: Removed session 9. Mar 13 12:22:07.790579 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.105.109.17 Mar 13 12:22:07.790751 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.105.109.17 Mar 13 12:22:08.194116 systemd[1]: Created slice kubepods-besteffort-pod6f20b30d_a33f_4525_9a4b_8fabf59c81f9.slice - libcontainer container kubepods-besteffort-pod6f20b30d_a33f_4525_9a4b_8fabf59c81f9.slice. 
Mar 13 12:22:08.263101 kubelet[3140]: I0313 12:22:08.263058 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6f20b30d-a33f-4525-9a4b-8fabf59c81f9-typha-certs\") pod \"calico-typha-5b6c8847d6-8chbx\" (UID: \"6f20b30d-a33f-4525-9a4b-8fabf59c81f9\") " pod="calico-system/calico-typha-5b6c8847d6-8chbx" Mar 13 12:22:08.263101 kubelet[3140]: I0313 12:22:08.263104 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f20b30d-a33f-4525-9a4b-8fabf59c81f9-tigera-ca-bundle\") pod \"calico-typha-5b6c8847d6-8chbx\" (UID: \"6f20b30d-a33f-4525-9a4b-8fabf59c81f9\") " pod="calico-system/calico-typha-5b6c8847d6-8chbx" Mar 13 12:22:08.263574 kubelet[3140]: I0313 12:22:08.263125 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxk4j\" (UniqueName: \"kubernetes.io/projected/6f20b30d-a33f-4525-9a4b-8fabf59c81f9-kube-api-access-mxk4j\") pod \"calico-typha-5b6c8847d6-8chbx\" (UID: \"6f20b30d-a33f-4525-9a4b-8fabf59c81f9\") " pod="calico-system/calico-typha-5b6c8847d6-8chbx" Mar 13 12:22:08.298337 systemd[1]: Created slice kubepods-besteffort-poda0e4129e_fdd6_4517_90dc_6cda0984e688.slice - libcontainer container kubepods-besteffort-poda0e4129e_fdd6_4517_90dc_6cda0984e688.slice. 
Mar 13 12:22:08.364256 kubelet[3140]: I0313 12:22:08.364208 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-sys-fs\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.364256 kubelet[3140]: I0313 12:22:08.364294 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a0e4129e-fdd6-4517-90dc-6cda0984e688-node-certs\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.364256 kubelet[3140]: I0313 12:22:08.364317 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e4129e-fdd6-4517-90dc-6cda0984e688-tigera-ca-bundle\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366113 kubelet[3140]: I0313 12:22:08.364594 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddgn\" (UniqueName: \"kubernetes.io/projected/a0e4129e-fdd6-4517-90dc-6cda0984e688-kube-api-access-7ddgn\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366113 kubelet[3140]: I0313 12:22:08.364623 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-policysync\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366113 kubelet[3140]: I0313 12:22:08.364663 3140 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-bpffs\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366113 kubelet[3140]: I0313 12:22:08.364678 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-cni-log-dir\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366113 kubelet[3140]: I0313 12:22:08.364693 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-cni-net-dir\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366307 kubelet[3140]: I0313 12:22:08.364713 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-nodeproc\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366307 kubelet[3140]: I0313 12:22:08.364730 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-var-run-calico\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366307 kubelet[3140]: I0313 12:22:08.364745 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-xtables-lock\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366307 kubelet[3140]: I0313 12:22:08.364783 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-flexvol-driver-host\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366307 kubelet[3140]: I0313 12:22:08.364800 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-cni-bin-dir\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366415 kubelet[3140]: I0313 12:22:08.364814 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-lib-modules\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.366415 kubelet[3140]: I0313 12:22:08.364828 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0e4129e-fdd6-4517-90dc-6cda0984e688-var-lib-calico\") pod \"calico-node-bvvnz\" (UID: \"a0e4129e-fdd6-4517-90dc-6cda0984e688\") " pod="calico-system/calico-node-bvvnz" Mar 13 12:22:08.420999 kubelet[3140]: E0313 12:22:08.420954 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:08.466538 kubelet[3140]: I0313 12:22:08.465841 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b9f33bdd-2737-45fd-8259-eb04da313d49-varrun\") pod \"csi-node-driver-4rnvb\" (UID: \"b9f33bdd-2737-45fd-8259-eb04da313d49\") " pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:08.466783 kubelet[3140]: I0313 12:22:08.466755 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6px8n\" (UniqueName: \"kubernetes.io/projected/b9f33bdd-2737-45fd-8259-eb04da313d49-kube-api-access-6px8n\") pod \"csi-node-driver-4rnvb\" (UID: \"b9f33bdd-2737-45fd-8259-eb04da313d49\") " pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:08.467684 kubelet[3140]: I0313 12:22:08.466867 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9f33bdd-2737-45fd-8259-eb04da313d49-kubelet-dir\") pod \"csi-node-driver-4rnvb\" (UID: \"b9f33bdd-2737-45fd-8259-eb04da313d49\") " pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:08.467684 kubelet[3140]: I0313 12:22:08.466936 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b9f33bdd-2737-45fd-8259-eb04da313d49-socket-dir\") pod \"csi-node-driver-4rnvb\" (UID: \"b9f33bdd-2737-45fd-8259-eb04da313d49\") " pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:08.467684 kubelet[3140]: I0313 12:22:08.466965 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b9f33bdd-2737-45fd-8259-eb04da313d49-registration-dir\") pod 
\"csi-node-driver-4rnvb\" (UID: \"b9f33bdd-2737-45fd-8259-eb04da313d49\") " pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:08.468716 kubelet[3140]: E0313 12:22:08.468614 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.469016 kubelet[3140]: W0313 12:22:08.468997 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.469200 kubelet[3140]: E0313 12:22:08.469186 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.469804 kubelet[3140]: E0313 12:22:08.469736 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.469969 kubelet[3140]: W0313 12:22:08.469899 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.470041 kubelet[3140]: E0313 12:22:08.470030 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.470563 kubelet[3140]: E0313 12:22:08.470457 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.471519 kubelet[3140]: W0313 12:22:08.470652 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.471646 kubelet[3140]: E0313 12:22:08.471629 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.473526 kubelet[3140]: E0313 12:22:08.472292 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.473723 kubelet[3140]: W0313 12:22:08.473709 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.473810 kubelet[3140]: E0313 12:22:08.473799 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.474493 kubelet[3140]: E0313 12:22:08.474373 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.478141 kubelet[3140]: W0313 12:22:08.477908 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.478141 kubelet[3140]: E0313 12:22:08.477933 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.478763 kubelet[3140]: E0313 12:22:08.478709 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.478763 kubelet[3140]: W0313 12:22:08.478724 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.478763 kubelet[3140]: E0313 12:22:08.478736 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.479556 kubelet[3140]: E0313 12:22:08.479299 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.479556 kubelet[3140]: W0313 12:22:08.479324 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.479556 kubelet[3140]: E0313 12:22:08.479338 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.480025 kubelet[3140]: E0313 12:22:08.479888 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.480025 kubelet[3140]: W0313 12:22:08.479903 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.480025 kubelet[3140]: E0313 12:22:08.479917 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.480813 kubelet[3140]: E0313 12:22:08.480518 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.480813 kubelet[3140]: W0313 12:22:08.480533 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.480813 kubelet[3140]: E0313 12:22:08.480546 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.481290 kubelet[3140]: E0313 12:22:08.481164 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.481290 kubelet[3140]: W0313 12:22:08.481182 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.481290 kubelet[3140]: E0313 12:22:08.481196 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.481699 kubelet[3140]: E0313 12:22:08.481600 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.481699 kubelet[3140]: W0313 12:22:08.481610 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.481699 kubelet[3140]: E0313 12:22:08.481625 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.482901 kubelet[3140]: E0313 12:22:08.482838 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.482901 kubelet[3140]: W0313 12:22:08.482854 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.482901 kubelet[3140]: E0313 12:22:08.482867 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.483440 kubelet[3140]: E0313 12:22:08.483360 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.483440 kubelet[3140]: W0313 12:22:08.483373 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.483440 kubelet[3140]: E0313 12:22:08.483386 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.483911 kubelet[3140]: E0313 12:22:08.483896 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.483911 kubelet[3140]: W0313 12:22:08.483926 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.483911 kubelet[3140]: E0313 12:22:08.483940 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.484213 kubelet[3140]: E0313 12:22:08.484202 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.484327 kubelet[3140]: W0313 12:22:08.484269 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.484327 kubelet[3140]: E0313 12:22:08.484284 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.485714 kubelet[3140]: E0313 12:22:08.485591 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.485714 kubelet[3140]: W0313 12:22:08.485609 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.485714 kubelet[3140]: E0313 12:22:08.485622 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.486211 kubelet[3140]: E0313 12:22:08.486112 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.486211 kubelet[3140]: W0313 12:22:08.486126 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.486211 kubelet[3140]: E0313 12:22:08.486139 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.486594 kubelet[3140]: E0313 12:22:08.486502 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.486594 kubelet[3140]: W0313 12:22:08.486515 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.486594 kubelet[3140]: E0313 12:22:08.486528 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.487689 kubelet[3140]: E0313 12:22:08.487545 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.487689 kubelet[3140]: W0313 12:22:08.487561 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.487689 kubelet[3140]: E0313 12:22:08.487573 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.488250 kubelet[3140]: E0313 12:22:08.488108 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.488250 kubelet[3140]: W0313 12:22:08.488120 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.488250 kubelet[3140]: E0313 12:22:08.488130 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.489049 kubelet[3140]: E0313 12:22:08.489031 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.489194 kubelet[3140]: W0313 12:22:08.489119 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.489194 kubelet[3140]: E0313 12:22:08.489138 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.489474 kubelet[3140]: E0313 12:22:08.489461 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.489618 kubelet[3140]: W0313 12:22:08.489569 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.489618 kubelet[3140]: E0313 12:22:08.489590 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.507539 containerd[1713]: time="2026-03-13T12:22:08.506021500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6c8847d6-8chbx,Uid:6f20b30d-a33f-4525-9a4b-8fabf59c81f9,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:08.521589 kubelet[3140]: E0313 12:22:08.519720 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.521589 kubelet[3140]: W0313 12:22:08.521506 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.521589 kubelet[3140]: E0313 12:22:08.521535 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.552583 containerd[1713]: time="2026-03-13T12:22:08.552164181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:08.552583 containerd[1713]: time="2026-03-13T12:22:08.552302782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:08.552583 containerd[1713]: time="2026-03-13T12:22:08.552323382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:08.552583 containerd[1713]: time="2026-03-13T12:22:08.552423542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:08.567962 kubelet[3140]: E0313 12:22:08.567863 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.567962 kubelet[3140]: W0313 12:22:08.567902 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.568190 kubelet[3140]: E0313 12:22:08.568136 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.568409 kubelet[3140]: E0313 12:22:08.568390 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.568409 kubelet[3140]: W0313 12:22:08.568405 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.568510 kubelet[3140]: E0313 12:22:08.568421 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.568638 kubelet[3140]: E0313 12:22:08.568621 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.568638 kubelet[3140]: W0313 12:22:08.568637 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.568694 kubelet[3140]: E0313 12:22:08.568648 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.568835 kubelet[3140]: E0313 12:22:08.568823 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.568835 kubelet[3140]: W0313 12:22:08.568834 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.568887 kubelet[3140]: E0313 12:22:08.568843 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:08.569076 kubelet[3140]: E0313 12:22:08.569062 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.569076 kubelet[3140]: W0313 12:22:08.569074 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.569137 kubelet[3140]: E0313 12:22:08.569083 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:08.569286 kubelet[3140]: E0313 12:22:08.569273 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:08.569286 kubelet[3140]: W0313 12:22:08.569284 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:08.569351 kubelet[3140]: E0313 12:22:08.569294 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 13 12:22:08.570347 kubelet[3140]: E0313 12:22:08.569499 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.570347 kubelet[3140]: W0313 12:22:08.569518 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.570347 kubelet[3140]: E0313 12:22:08.569526 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.570347 kubelet[3140]: E0313 12:22:08.569757 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.570347 kubelet[3140]: W0313 12:22:08.569766 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.570347 kubelet[3140]: E0313 12:22:08.569776 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.570347 kubelet[3140]: E0313 12:22:08.569982 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.570347 kubelet[3140]: W0313 12:22:08.569990 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.570347 kubelet[3140]: E0313 12:22:08.570000 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.569686 systemd[1]: Started cri-containerd-af9a5e806e0092f6ed4b7263fd2e4d4671db8fd8450b848d8c3e8da4521d36cf.scope - libcontainer container af9a5e806e0092f6ed4b7263fd2e4d4671db8fd8450b848d8c3e8da4521d36cf.
Mar 13 12:22:08.570683 kubelet[3140]: E0313 12:22:08.570599 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.570683 kubelet[3140]: W0313 12:22:08.570614 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.570683 kubelet[3140]: E0313 12:22:08.570627 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.570909 kubelet[3140]: E0313 12:22:08.570884 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.570909 kubelet[3140]: W0313 12:22:08.570903 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.570966 kubelet[3140]: E0313 12:22:08.570914 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.571198 kubelet[3140]: E0313 12:22:08.571182 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.571198 kubelet[3140]: W0313 12:22:08.571196 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.571301 kubelet[3140]: E0313 12:22:08.571207 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.571522 kubelet[3140]: E0313 12:22:08.571475 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.571522 kubelet[3140]: W0313 12:22:08.571520 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.571602 kubelet[3140]: E0313 12:22:08.571538 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.571808 kubelet[3140]: E0313 12:22:08.571791 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.571808 kubelet[3140]: W0313 12:22:08.571803 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.571904 kubelet[3140]: E0313 12:22:08.571814 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.572304 kubelet[3140]: E0313 12:22:08.572273 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.572304 kubelet[3140]: W0313 12:22:08.572296 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.572396 kubelet[3140]: E0313 12:22:08.572308 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.572578 kubelet[3140]: E0313 12:22:08.572563 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.572578 kubelet[3140]: W0313 12:22:08.572575 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.572679 kubelet[3140]: E0313 12:22:08.572585 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.574012 kubelet[3140]: E0313 12:22:08.573805 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.574012 kubelet[3140]: W0313 12:22:08.573820 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.574012 kubelet[3140]: E0313 12:22:08.573834 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.574766 kubelet[3140]: E0313 12:22:08.574197 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.574766 kubelet[3140]: W0313 12:22:08.574211 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.575003 kubelet[3140]: E0313 12:22:08.574880 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.575594 kubelet[3140]: E0313 12:22:08.575573 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.575770 kubelet[3140]: W0313 12:22:08.575697 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.575770 kubelet[3140]: E0313 12:22:08.575718 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.576315 kubelet[3140]: E0313 12:22:08.576226 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.576315 kubelet[3140]: W0313 12:22:08.576242 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.576315 kubelet[3140]: E0313 12:22:08.576255 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.577103 kubelet[3140]: E0313 12:22:08.576969 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.577103 kubelet[3140]: W0313 12:22:08.576984 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.577103 kubelet[3140]: E0313 12:22:08.576996 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.577766 kubelet[3140]: E0313 12:22:08.577660 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.577766 kubelet[3140]: W0313 12:22:08.577674 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.577766 kubelet[3140]: E0313 12:22:08.577687 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.578123 kubelet[3140]: E0313 12:22:08.577986 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.578123 kubelet[3140]: W0313 12:22:08.577998 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.578123 kubelet[3140]: E0313 12:22:08.578009 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.579000 kubelet[3140]: E0313 12:22:08.578987 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.579059 kubelet[3140]: W0313 12:22:08.579048 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.579144 kubelet[3140]: E0313 12:22:08.579123 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.579631 kubelet[3140]: E0313 12:22:08.579585 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.579631 kubelet[3140]: W0313 12:22:08.579600 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.579631 kubelet[3140]: E0313 12:22:08.579611 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.586395 kubelet[3140]: E0313 12:22:08.586322 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:08.586395 kubelet[3140]: W0313 12:22:08.586340 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:08.586395 kubelet[3140]: E0313 12:22:08.586356 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:08.608930 containerd[1713]: time="2026-03-13T12:22:08.608748562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bvvnz,Uid:a0e4129e-fdd6-4517-90dc-6cda0984e688,Namespace:calico-system,Attempt:0,}"
Mar 13 12:22:08.610605 containerd[1713]: time="2026-03-13T12:22:08.610499125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6c8847d6-8chbx,Uid:6f20b30d-a33f-4525-9a4b-8fabf59c81f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"af9a5e806e0092f6ed4b7263fd2e4d4671db8fd8450b848d8c3e8da4521d36cf\""
Mar 13 12:22:08.612951 containerd[1713]: time="2026-03-13T12:22:08.612899089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 13 12:22:08.662413 containerd[1713]: time="2026-03-13T12:22:08.661249454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 13 12:22:08.662413 containerd[1713]: time="2026-03-13T12:22:08.661328255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 13 12:22:08.662413 containerd[1713]: time="2026-03-13T12:22:08.661344135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:22:08.662413 containerd[1713]: time="2026-03-13T12:22:08.661428895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 13 12:22:08.679657 systemd[1]: Started cri-containerd-dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6.scope - libcontainer container dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6.
Mar 13 12:22:08.701896 containerd[1713]: time="2026-03-13T12:22:08.701801366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bvvnz,Uid:a0e4129e-fdd6-4517-90dc-6cda0984e688,Namespace:calico-system,Attempt:0,} returns sandbox id \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\""
Mar 13 12:22:09.958242 kubelet[3140]: E0313 12:22:09.958173 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49"
Mar 13 12:22:10.149686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3600798195.mount: Deactivated successfully.
Mar 13 12:22:10.940822 containerd[1713]: time="2026-03-13T12:22:10.940219968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:10.943558 containerd[1713]: time="2026-03-13T12:22:10.943513854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 13 12:22:10.947641 containerd[1713]: time="2026-03-13T12:22:10.947603781Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:10.954365 containerd[1713]: time="2026-03-13T12:22:10.954179113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 12:22:10.956998 containerd[1713]: time="2026-03-13T12:22:10.956001076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.343066427s"
Mar 13 12:22:10.956998 containerd[1713]: time="2026-03-13T12:22:10.956041156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 13 12:22:10.964677 containerd[1713]: time="2026-03-13T12:22:10.962757488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 13 12:22:10.980780 containerd[1713]: time="2026-03-13T12:22:10.980739320Z" level=info msg="CreateContainer within sandbox \"af9a5e806e0092f6ed4b7263fd2e4d4671db8fd8450b848d8c3e8da4521d36cf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 13 12:22:11.028691 containerd[1713]: time="2026-03-13T12:22:11.028633164Z" level=info msg="CreateContainer within sandbox \"af9a5e806e0092f6ed4b7263fd2e4d4671db8fd8450b848d8c3e8da4521d36cf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6eaa13840d63aa9f1e4f5baafa6983ce72859d7952b271b51421415bf52be4c8\""
Mar 13 12:22:11.029475 containerd[1713]: time="2026-03-13T12:22:11.029437806Z" level=info msg="StartContainer for \"6eaa13840d63aa9f1e4f5baafa6983ce72859d7952b271b51421415bf52be4c8\""
Mar 13 12:22:11.060229 systemd[1]: Started cri-containerd-6eaa13840d63aa9f1e4f5baafa6983ce72859d7952b271b51421415bf52be4c8.scope - libcontainer container 6eaa13840d63aa9f1e4f5baafa6983ce72859d7952b271b51421415bf52be4c8.
Mar 13 12:22:11.106449 containerd[1713]: time="2026-03-13T12:22:11.106398542Z" level=info msg="StartContainer for \"6eaa13840d63aa9f1e4f5baafa6983ce72859d7952b271b51421415bf52be4c8\" returns successfully"
Mar 13 12:22:11.958173 kubelet[3140]: E0313 12:22:11.957681 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49"
Mar 13 12:22:12.080408 kubelet[3140]: E0313 12:22:12.080359 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.080565 kubelet[3140]: W0313 12:22:12.080396 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.080565 kubelet[3140]: E0313 12:22:12.080454 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.080679 kubelet[3140]: E0313 12:22:12.080663 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.080679 kubelet[3140]: W0313 12:22:12.080675 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.080761 kubelet[3140]: E0313 12:22:12.080685 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.080849 kubelet[3140]: E0313 12:22:12.080837 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.080849 kubelet[3140]: W0313 12:22:12.080847 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.080927 kubelet[3140]: E0313 12:22:12.080855 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.081033 kubelet[3140]: E0313 12:22:12.081021 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.081033 kubelet[3140]: W0313 12:22:12.081030 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.081098 kubelet[3140]: E0313 12:22:12.081039 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.082708 kubelet[3140]: E0313 12:22:12.082609 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.082708 kubelet[3140]: W0313 12:22:12.082660 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.082708 kubelet[3140]: E0313 12:22:12.082678 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.083213 kubelet[3140]: E0313 12:22:12.083104 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.083213 kubelet[3140]: W0313 12:22:12.083117 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.083213 kubelet[3140]: E0313 12:22:12.083128 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.083680 kubelet[3140]: E0313 12:22:12.083549 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.083680 kubelet[3140]: W0313 12:22:12.083563 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.083680 kubelet[3140]: E0313 12:22:12.083575 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.083864 kubelet[3140]: E0313 12:22:12.083853 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.084002 kubelet[3140]: W0313 12:22:12.083885 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.084002 kubelet[3140]: E0313 12:22:12.083897 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.084300 kubelet[3140]: E0313 12:22:12.084199 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.084300 kubelet[3140]: W0313 12:22:12.084216 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.084300 kubelet[3140]: E0313 12:22:12.084227 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.084474 kubelet[3140]: E0313 12:22:12.084464 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.084474 kubelet[3140]: W0313 12:22:12.084502 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.084474 kubelet[3140]: E0313 12:22:12.084514 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.084921 kubelet[3140]: E0313 12:22:12.084811 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.084921 kubelet[3140]: W0313 12:22:12.084826 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.084921 kubelet[3140]: E0313 12:22:12.084837 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.085115 kubelet[3140]: E0313 12:22:12.085088 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.085115 kubelet[3140]: W0313 12:22:12.085099 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.085320 kubelet[3140]: E0313 12:22:12.085234 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.085584 kubelet[3140]: E0313 12:22:12.085518 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.085584 kubelet[3140]: W0313 12:22:12.085532 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.085584 kubelet[3140]: E0313 12:22:12.085543 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.085911 kubelet[3140]: E0313 12:22:12.085840 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.085911 kubelet[3140]: W0313 12:22:12.085853 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.085911 kubelet[3140]: E0313 12:22:12.085865 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.086243 kubelet[3140]: E0313 12:22:12.086167 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.086243 kubelet[3140]: W0313 12:22:12.086178 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.086243 kubelet[3140]: E0313 12:22:12.086188 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.092562 kubelet[3140]: I0313 12:22:12.092394 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5b6c8847d6-8chbx" podStartSLOduration=1.745077733 podStartE2EDuration="4.092379527s" podCreationTimestamp="2026-03-13 12:22:08 +0000 UTC" firstStartedPulling="2026-03-13 12:22:08.612398288 +0000 UTC m=+23.773178608" lastFinishedPulling="2026-03-13 12:22:10.959700082 +0000 UTC m=+26.120480402" observedRunningTime="2026-03-13 12:22:12.089804043 +0000 UTC m=+27.250584363" watchObservedRunningTime="2026-03-13 12:22:12.092379527 +0000 UTC m=+27.253159847"
Mar 13 12:22:12.094144 kubelet[3140]: E0313 12:22:12.093972 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.094144 kubelet[3140]: W0313 12:22:12.093992 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.094144 kubelet[3140]: E0313 12:22:12.094010 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.095324 kubelet[3140]: E0313 12:22:12.095300 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.095611 kubelet[3140]: W0313 12:22:12.095461 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.095611 kubelet[3140]: E0313 12:22:12.095494 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.096294 kubelet[3140]: E0313 12:22:12.096125 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.096294 kubelet[3140]: W0313 12:22:12.096139 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.096294 kubelet[3140]: E0313 12:22:12.096150 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.096631 kubelet[3140]: E0313 12:22:12.096619 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.096801 kubelet[3140]: W0313 12:22:12.096713 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.096801 kubelet[3140]: E0313 12:22:12.096743 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.097639 kubelet[3140]: E0313 12:22:12.097573 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.097639 kubelet[3140]: W0313 12:22:12.097614 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.097639 kubelet[3140]: E0313 12:22:12.097628 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.098161 kubelet[3140]: E0313 12:22:12.098043 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.098161 kubelet[3140]: W0313 12:22:12.098055 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.098161 kubelet[3140]: E0313 12:22:12.098080 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.098427 kubelet[3140]: E0313 12:22:12.098415 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.098530 kubelet[3140]: W0313 12:22:12.098500 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.098530 kubelet[3140]: E0313 12:22:12.098516 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.098809 kubelet[3140]: E0313 12:22:12.098766 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.098809 kubelet[3140]: W0313 12:22:12.098781 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.098809 kubelet[3140]: E0313 12:22:12.098791 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.099691 kubelet[3140]: E0313 12:22:12.099580 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.099691 kubelet[3140]: W0313 12:22:12.099594 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.099691 kubelet[3140]: E0313 12:22:12.099605 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.100204 kubelet[3140]: E0313 12:22:12.100058 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.100204 kubelet[3140]: W0313 12:22:12.100071 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.100204 kubelet[3140]: E0313 12:22:12.100082 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 12:22:12.100574 kubelet[3140]: E0313 12:22:12.100560 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 12:22:12.100940 kubelet[3140]: W0313 12:22:12.100924 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 12:22:12.101025 kubelet[3140]: E0313 12:22:12.101014 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:12.101524 kubelet[3140]: E0313 12:22:12.101407 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.101524 kubelet[3140]: W0313 12:22:12.101420 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.101524 kubelet[3140]: E0313 12:22:12.101430 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:12.102670 kubelet[3140]: E0313 12:22:12.102555 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.102670 kubelet[3140]: W0313 12:22:12.102569 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.102670 kubelet[3140]: E0313 12:22:12.102580 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:12.103104 kubelet[3140]: E0313 12:22:12.103005 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.103104 kubelet[3140]: W0313 12:22:12.103017 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.103104 kubelet[3140]: E0313 12:22:12.103028 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:12.103470 kubelet[3140]: E0313 12:22:12.103387 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.103470 kubelet[3140]: W0313 12:22:12.103398 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.103470 kubelet[3140]: E0313 12:22:12.103408 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:12.103850 kubelet[3140]: E0313 12:22:12.103707 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.103850 kubelet[3140]: W0313 12:22:12.103719 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.103850 kubelet[3140]: E0313 12:22:12.103729 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:12.104035 kubelet[3140]: E0313 12:22:12.104023 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.104172 kubelet[3140]: W0313 12:22:12.104073 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.104172 kubelet[3140]: E0313 12:22:12.104089 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 12:22:12.104555 kubelet[3140]: E0313 12:22:12.104542 3140 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 12:22:12.104672 kubelet[3140]: W0313 12:22:12.104633 3140 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 12:22:12.104672 kubelet[3140]: E0313 12:22:12.104650 3140 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 12:22:12.452614 containerd[1713]: time="2026-03-13T12:22:12.451728203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:12.454786 containerd[1713]: time="2026-03-13T12:22:12.454724088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 13 12:22:12.459442 containerd[1713]: time="2026-03-13T12:22:12.459397097Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:12.464609 containerd[1713]: time="2026-03-13T12:22:12.464566306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:12.465515 containerd[1713]: time="2026-03-13T12:22:12.465247067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.502451619s" Mar 13 12:22:12.465515 containerd[1713]: time="2026-03-13T12:22:12.465286987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 13 12:22:12.475442 containerd[1713]: time="2026-03-13T12:22:12.475222925Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 12:22:12.512344 containerd[1713]: time="2026-03-13T12:22:12.512205390Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925\"" Mar 13 12:22:12.513597 containerd[1713]: time="2026-03-13T12:22:12.512859031Z" level=info msg="StartContainer for \"f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925\"" Mar 13 12:22:12.542792 systemd[1]: Started cri-containerd-f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925.scope - libcontainer container f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925. Mar 13 12:22:12.574135 containerd[1713]: time="2026-03-13T12:22:12.573904339Z" level=info msg="StartContainer for \"f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925\" returns successfully" Mar 13 12:22:12.583876 systemd[1]: cri-containerd-f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925.scope: Deactivated successfully. 
Mar 13 12:22:12.964930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925-rootfs.mount: Deactivated successfully. Mar 13 12:22:13.814220 containerd[1713]: time="2026-03-13T12:22:13.814159563Z" level=info msg="shim disconnected" id=f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925 namespace=k8s.io Mar 13 12:22:13.814220 containerd[1713]: time="2026-03-13T12:22:13.814213603Z" level=warning msg="cleaning up after shim disconnected" id=f646cee6bd5c5d879ca687332704de2d93b083d3bb6f343923605fc037acd925 namespace=k8s.io Mar 13 12:22:13.814220 containerd[1713]: time="2026-03-13T12:22:13.814222923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 12:22:13.957837 kubelet[3140]: E0313 12:22:13.957791 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:14.081017 containerd[1713]: time="2026-03-13T12:22:14.080449309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 12:22:15.958047 kubelet[3140]: E0313 12:22:15.957989 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:17.958496 kubelet[3140]: E0313 12:22:17.958441 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" 
podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:19.958346 kubelet[3140]: E0313 12:22:19.958290 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:20.213204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914773222.mount: Deactivated successfully. Mar 13 12:22:20.409525 containerd[1713]: time="2026-03-13T12:22:20.408790940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:20.413316 containerd[1713]: time="2026-03-13T12:22:20.413212429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 13 12:22:20.417713 containerd[1713]: time="2026-03-13T12:22:20.417623037Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:20.424303 containerd[1713]: time="2026-03-13T12:22:20.424237810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:20.425460 containerd[1713]: time="2026-03-13T12:22:20.424921411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.344419262s" Mar 13 12:22:20.425460 containerd[1713]: 
time="2026-03-13T12:22:20.424957291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 13 12:22:20.434390 containerd[1713]: time="2026-03-13T12:22:20.434261469Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 12:22:20.478204 containerd[1713]: time="2026-03-13T12:22:20.477578951Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555\"" Mar 13 12:22:20.478317 containerd[1713]: time="2026-03-13T12:22:20.478297232Z" level=info msg="StartContainer for \"d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555\"" Mar 13 12:22:20.516673 systemd[1]: Started cri-containerd-d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555.scope - libcontainer container d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555. Mar 13 12:22:20.549811 containerd[1713]: time="2026-03-13T12:22:20.549678288Z" level=info msg="StartContainer for \"d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555\" returns successfully" Mar 13 12:22:20.596358 systemd[1]: cri-containerd-d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555.scope: Deactivated successfully. Mar 13 12:22:21.213146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555-rootfs.mount: Deactivated successfully. 
Mar 13 12:22:21.958172 kubelet[3140]: E0313 12:22:21.958106 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:22.325918 containerd[1713]: time="2026-03-13T12:22:22.325699046Z" level=info msg="shim disconnected" id=d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555 namespace=k8s.io Mar 13 12:22:22.325918 containerd[1713]: time="2026-03-13T12:22:22.325754966Z" level=warning msg="cleaning up after shim disconnected" id=d33a073429e1093621868e88d0e5a8dcadf4aa26d3d658e7e81567dd88aaa555 namespace=k8s.io Mar 13 12:22:22.325918 containerd[1713]: time="2026-03-13T12:22:22.325763326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 12:22:23.103751 containerd[1713]: time="2026-03-13T12:22:23.102455632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 12:22:23.958582 kubelet[3140]: E0313 12:22:23.958526 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:25.957725 kubelet[3140]: E0313 12:22:25.957671 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:26.765117 containerd[1713]: time="2026-03-13T12:22:26.764287261Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:26.772640 containerd[1713]: time="2026-03-13T12:22:26.772595437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 13 12:22:26.778091 containerd[1713]: time="2026-03-13T12:22:26.777304566Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:26.786531 containerd[1713]: time="2026-03-13T12:22:26.786490263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:26.787279 containerd[1713]: time="2026-03-13T12:22:26.787248345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.684729473s" Mar 13 12:22:26.787376 containerd[1713]: time="2026-03-13T12:22:26.787361385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 13 12:22:26.848090 containerd[1713]: time="2026-03-13T12:22:26.848029579Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 12:22:26.889430 containerd[1713]: time="2026-03-13T12:22:26.889380897Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031\"" Mar 13 12:22:26.891570 containerd[1713]: time="2026-03-13T12:22:26.891529181Z" level=info msg="StartContainer for \"a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031\"" Mar 13 12:22:26.921739 systemd[1]: Started cri-containerd-a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031.scope - libcontainer container a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031. Mar 13 12:22:26.949139 containerd[1713]: time="2026-03-13T12:22:26.948990770Z" level=info msg="StartContainer for \"a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031\" returns successfully" Mar 13 12:22:27.959200 kubelet[3140]: E0313 12:22:27.957925 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:28.425634 systemd[1]: cri-containerd-a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031.scope: Deactivated successfully. Mar 13 12:22:28.442704 kubelet[3140]: I0313 12:22:28.442674 3140 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 13 12:22:28.452859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031-rootfs.mount: Deactivated successfully. Mar 13 12:22:29.365115 systemd[1]: Created slice kubepods-burstable-pod2dd1ee1d_a2bd_4d4b_a04a_c030316e7f2c.slice - libcontainer container kubepods-burstable-pod2dd1ee1d_a2bd_4d4b_a04a_c030316e7f2c.slice. 
Mar 13 12:22:29.375654 containerd[1713]: time="2026-03-13T12:22:29.375185050Z" level=info msg="shim disconnected" id=a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031 namespace=k8s.io Mar 13 12:22:29.375654 containerd[1713]: time="2026-03-13T12:22:29.375251811Z" level=warning msg="cleaning up after shim disconnected" id=a991b3def1b91dab42024bcea1d46df338725067ba675b425077825667f1c031 namespace=k8s.io Mar 13 12:22:29.375654 containerd[1713]: time="2026-03-13T12:22:29.375260531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 13 12:22:29.376386 systemd[1]: Created slice kubepods-besteffort-podb9f33bdd_2737_45fd_8259_eb04da313d49.slice - libcontainer container kubepods-besteffort-podb9f33bdd_2737_45fd_8259_eb04da313d49.slice. Mar 13 12:22:29.398842 systemd[1]: Created slice kubepods-besteffort-pod077001e3_6017_4dce_830f_acc444babdea.slice - libcontainer container kubepods-besteffort-pod077001e3_6017_4dce_830f_acc444babdea.slice. Mar 13 12:22:29.401591 containerd[1713]: time="2026-03-13T12:22:29.400221942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4rnvb,Uid:b9f33bdd-2737-45fd-8259-eb04da313d49,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:29.411073 systemd[1]: Created slice kubepods-besteffort-pode647b091_316e_49d6_91db_ea686f6b4ba4.slice - libcontainer container kubepods-besteffort-pode647b091_316e_49d6_91db_ea686f6b4ba4.slice. Mar 13 12:22:29.418654 systemd[1]: Created slice kubepods-besteffort-pod5012c233_3465_4ca7_bdc4_e2ef60a160c5.slice - libcontainer container kubepods-besteffort-pod5012c233_3465_4ca7_bdc4_e2ef60a160c5.slice. Mar 13 12:22:29.433093 systemd[1]: Created slice kubepods-besteffort-pod5f01a07f_bab0_4312_a812_86688ae7f0c8.slice - libcontainer container kubepods-besteffort-pod5f01a07f_bab0_4312_a812_86688ae7f0c8.slice. 
Mar 13 12:22:29.456407 systemd[1]: Created slice kubepods-besteffort-podb22d9553_9cf2_4f77_9f81_2807136a31dd.slice - libcontainer container kubepods-besteffort-podb22d9553_9cf2_4f77_9f81_2807136a31dd.slice. Mar 13 12:22:29.464513 kubelet[3140]: I0313 12:22:29.464200 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b22d9553-9cf2-4f77-9f81-2807136a31dd-tigera-ca-bundle\") pod \"calico-kube-controllers-77b8d6f4b5-8vb87\" (UID: \"b22d9553-9cf2-4f77-9f81-2807136a31dd\") " pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" Mar 13 12:22:29.464513 kubelet[3140]: I0313 12:22:29.464240 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj7r9\" (UniqueName: \"kubernetes.io/projected/b22d9553-9cf2-4f77-9f81-2807136a31dd-kube-api-access-tj7r9\") pod \"calico-kube-controllers-77b8d6f4b5-8vb87\" (UID: \"b22d9553-9cf2-4f77-9f81-2807136a31dd\") " pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" Mar 13 12:22:29.464513 kubelet[3140]: I0313 12:22:29.464258 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5012c233-3465-4ca7-bdc4-e2ef60a160c5-calico-apiserver-certs\") pod \"calico-apiserver-6d8b56c7bd-4ctwx\" (UID: \"5012c233-3465-4ca7-bdc4-e2ef60a160c5\") " pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" Mar 13 12:22:29.464513 kubelet[3140]: I0313 12:22:29.464310 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f01a07f-bab0-4312-a812-86688ae7f0c8-calico-apiserver-certs\") pod \"calico-apiserver-6d8b56c7bd-nczjw\" (UID: \"5f01a07f-bab0-4312-a812-86688ae7f0c8\") " pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" Mar 13 12:22:29.464513 kubelet[3140]: I0313 
12:22:29.464380 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5xr5\" (UniqueName: \"kubernetes.io/projected/e647b091-316e-49d6-91db-ea686f6b4ba4-kube-api-access-n5xr5\") pod \"goldmane-9f7667bb8-xl56r\" (UID: \"e647b091-316e-49d6-91db-ea686f6b4ba4\") " pod="calico-system/goldmane-9f7667bb8-xl56r" Mar 13 12:22:29.464937 kubelet[3140]: I0313 12:22:29.464403 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqmf\" (UniqueName: \"kubernetes.io/projected/077001e3-6017-4dce-830f-acc444babdea-kube-api-access-wcqmf\") pod \"whisker-859785876c-5rs9x\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " pod="calico-system/whisker-859785876c-5rs9x" Mar 13 12:22:29.464937 kubelet[3140]: I0313 12:22:29.464420 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv4v8\" (UniqueName: \"kubernetes.io/projected/f9601ee0-5635-473a-aecf-cb8b509e8382-kube-api-access-gv4v8\") pod \"coredns-7d764666f9-749vw\" (UID: \"f9601ee0-5635-473a-aecf-cb8b509e8382\") " pod="kube-system/coredns-7d764666f9-749vw" Mar 13 12:22:29.464937 kubelet[3140]: I0313 12:22:29.464448 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/077001e3-6017-4dce-830f-acc444babdea-whisker-backend-key-pair\") pod \"whisker-859785876c-5rs9x\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " pod="calico-system/whisker-859785876c-5rs9x" Mar 13 12:22:29.464937 kubelet[3140]: I0313 12:22:29.464464 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nwpv\" (UniqueName: \"kubernetes.io/projected/5012c233-3465-4ca7-bdc4-e2ef60a160c5-kube-api-access-7nwpv\") pod \"calico-apiserver-6d8b56c7bd-4ctwx\" (UID: \"5012c233-3465-4ca7-bdc4-e2ef60a160c5\") " 
pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" Mar 13 12:22:29.466370 kubelet[3140]: I0313 12:22:29.464496 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e647b091-316e-49d6-91db-ea686f6b4ba4-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-xl56r\" (UID: \"e647b091-316e-49d6-91db-ea686f6b4ba4\") " pod="calico-system/goldmane-9f7667bb8-xl56r" Mar 13 12:22:29.466370 kubelet[3140]: I0313 12:22:29.465558 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-nginx-config\") pod \"whisker-859785876c-5rs9x\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " pod="calico-system/whisker-859785876c-5rs9x" Mar 13 12:22:29.466370 kubelet[3140]: I0313 12:22:29.465759 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-whisker-ca-bundle\") pod \"whisker-859785876c-5rs9x\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " pod="calico-system/whisker-859785876c-5rs9x" Mar 13 12:22:29.466370 kubelet[3140]: I0313 12:22:29.465780 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9601ee0-5635-473a-aecf-cb8b509e8382-config-volume\") pod \"coredns-7d764666f9-749vw\" (UID: \"f9601ee0-5635-473a-aecf-cb8b509e8382\") " pod="kube-system/coredns-7d764666f9-749vw" Mar 13 12:22:29.466370 kubelet[3140]: I0313 12:22:29.465798 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e647b091-316e-49d6-91db-ea686f6b4ba4-config\") pod \"goldmane-9f7667bb8-xl56r\" (UID: 
\"e647b091-316e-49d6-91db-ea686f6b4ba4\") " pod="calico-system/goldmane-9f7667bb8-xl56r" Mar 13 12:22:29.466981 kubelet[3140]: I0313 12:22:29.465827 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c-config-volume\") pod \"coredns-7d764666f9-mh4sx\" (UID: \"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c\") " pod="kube-system/coredns-7d764666f9-mh4sx" Mar 13 12:22:29.466981 kubelet[3140]: I0313 12:22:29.465843 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhbd\" (UniqueName: \"kubernetes.io/projected/2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c-kube-api-access-cdhbd\") pod \"coredns-7d764666f9-mh4sx\" (UID: \"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c\") " pod="kube-system/coredns-7d764666f9-mh4sx" Mar 13 12:22:29.466981 kubelet[3140]: I0313 12:22:29.465864 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e647b091-316e-49d6-91db-ea686f6b4ba4-goldmane-key-pair\") pod \"goldmane-9f7667bb8-xl56r\" (UID: \"e647b091-316e-49d6-91db-ea686f6b4ba4\") " pod="calico-system/goldmane-9f7667bb8-xl56r" Mar 13 12:22:29.466981 kubelet[3140]: I0313 12:22:29.465881 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4j7r\" (UniqueName: \"kubernetes.io/projected/5f01a07f-bab0-4312-a812-86688ae7f0c8-kube-api-access-n4j7r\") pod \"calico-apiserver-6d8b56c7bd-nczjw\" (UID: \"5f01a07f-bab0-4312-a812-86688ae7f0c8\") " pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" Mar 13 12:22:29.468065 systemd[1]: Created slice kubepods-burstable-podf9601ee0_5635_473a_aecf_cb8b509e8382.slice - libcontainer container kubepods-burstable-podf9601ee0_5635_473a_aecf_cb8b509e8382.slice. 
Mar 13 12:22:29.516492 containerd[1713]: time="2026-03-13T12:22:29.516297541Z" level=error msg="Failed to destroy network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.518011 containerd[1713]: time="2026-03-13T12:22:29.517757544Z" level=error msg="encountered an error cleaning up failed sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.518011 containerd[1713]: time="2026-03-13T12:22:29.517819824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4rnvb,Uid:b9f33bdd-2737-45fd-8259-eb04da313d49,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.518175 kubelet[3140]: E0313 12:22:29.518035 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.518175 kubelet[3140]: E0313 12:22:29.518103 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:29.518175 kubelet[3140]: E0313 12:22:29.518120 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4rnvb" Mar 13 12:22:29.518290 kubelet[3140]: E0313 12:22:29.518177 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4rnvb_calico-system(b9f33bdd-2737-45fd-8259-eb04da313d49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4rnvb_calico-system(b9f33bdd-2737-45fd-8259-eb04da313d49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:29.519094 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd-shm.mount: Deactivated successfully. 
Mar 13 12:22:29.682522 containerd[1713]: time="2026-03-13T12:22:29.681504841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mh4sx,Uid:2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c,Namespace:kube-system,Attempt:0,}" Mar 13 12:22:29.722064 containerd[1713]: time="2026-03-13T12:22:29.722019164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859785876c-5rs9x,Uid:077001e3-6017-4dce-830f-acc444babdea,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:29.741566 containerd[1713]: time="2026-03-13T12:22:29.741518524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xl56r,Uid:e647b091-316e-49d6-91db-ea686f6b4ba4,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:29.753173 containerd[1713]: time="2026-03-13T12:22:29.752873147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-4ctwx,Uid:5012c233-3465-4ca7-bdc4-e2ef60a160c5,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:29.773896 containerd[1713]: time="2026-03-13T12:22:29.773857271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-nczjw,Uid:5f01a07f-bab0-4312-a812-86688ae7f0c8,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:29.782728 containerd[1713]: time="2026-03-13T12:22:29.782474928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8d6f4b5-8vb87,Uid:b22d9553-9cf2-4f77-9f81-2807136a31dd,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:29.788569 containerd[1713]: time="2026-03-13T12:22:29.788325660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-749vw,Uid:f9601ee0-5635-473a-aecf-cb8b509e8382,Namespace:kube-system,Attempt:0,}" Mar 13 12:22:29.822095 containerd[1713]: time="2026-03-13T12:22:29.822046530Z" level=error msg="Failed to destroy network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.822547 containerd[1713]: time="2026-03-13T12:22:29.822515171Z" level=error msg="encountered an error cleaning up failed sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.822771 containerd[1713]: time="2026-03-13T12:22:29.822653171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mh4sx,Uid:2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.823131 kubelet[3140]: E0313 12:22:29.822859 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:29.823131 kubelet[3140]: E0313 12:22:29.822916 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7d764666f9-mh4sx" Mar 13 12:22:29.823131 kubelet[3140]: E0313 12:22:29.822936 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mh4sx" Mar 13 12:22:29.823254 kubelet[3140]: E0313 12:22:29.822981 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-mh4sx_kube-system(2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-mh4sx_kube-system(2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mh4sx" podUID="2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c" Mar 13 12:22:30.014289 containerd[1713]: time="2026-03-13T12:22:30.013984565Z" level=error msg="Failed to destroy network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.014866 containerd[1713]: time="2026-03-13T12:22:30.014826686Z" level=error msg="encountered an error cleaning up failed sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.014951 containerd[1713]: time="2026-03-13T12:22:30.014888687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859785876c-5rs9x,Uid:077001e3-6017-4dce-830f-acc444babdea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.015669 kubelet[3140]: E0313 12:22:30.015602 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.015669 kubelet[3140]: E0313 12:22:30.015662 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-859785876c-5rs9x" Mar 13 12:22:30.015891 kubelet[3140]: E0313 12:22:30.015680 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-859785876c-5rs9x" Mar 13 12:22:30.015891 kubelet[3140]: E0313 12:22:30.015729 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-859785876c-5rs9x_calico-system(077001e3-6017-4dce-830f-acc444babdea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-859785876c-5rs9x_calico-system(077001e3-6017-4dce-830f-acc444babdea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-859785876c-5rs9x" podUID="077001e3-6017-4dce-830f-acc444babdea" Mar 13 12:22:30.052114 containerd[1713]: time="2026-03-13T12:22:30.051691002Z" level=error msg="Failed to destroy network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.052448 containerd[1713]: time="2026-03-13T12:22:30.052415164Z" level=error msg="encountered an error cleaning up failed sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.052594 containerd[1713]: time="2026-03-13T12:22:30.052570484Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-nczjw,Uid:5f01a07f-bab0-4312-a812-86688ae7f0c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.052947 kubelet[3140]: E0313 12:22:30.052913 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.053070 kubelet[3140]: E0313 12:22:30.053055 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" Mar 13 12:22:30.053141 kubelet[3140]: E0313 12:22:30.053123 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" Mar 13 12:22:30.053256 kubelet[3140]: E0313 12:22:30.053232 3140 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d8b56c7bd-nczjw_calico-system(5f01a07f-bab0-4312-a812-86688ae7f0c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d8b56c7bd-nczjw_calico-system(5f01a07f-bab0-4312-a812-86688ae7f0c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" podUID="5f01a07f-bab0-4312-a812-86688ae7f0c8" Mar 13 12:22:30.066149 containerd[1713]: time="2026-03-13T12:22:30.066093072Z" level=error msg="Failed to destroy network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.066752 containerd[1713]: time="2026-03-13T12:22:30.066605833Z" level=error msg="encountered an error cleaning up failed sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.066752 containerd[1713]: time="2026-03-13T12:22:30.066659353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xl56r,Uid:e647b091-316e-49d6-91db-ea686f6b4ba4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.066917 kubelet[3140]: E0313 12:22:30.066874 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.066969 kubelet[3140]: E0313 12:22:30.066931 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-xl56r" Mar 13 12:22:30.067017 kubelet[3140]: E0313 12:22:30.066954 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-xl56r" Mar 13 12:22:30.067045 kubelet[3140]: E0313 12:22:30.067020 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-xl56r_calico-system(e647b091-316e-49d6-91db-ea686f6b4ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-xl56r_calico-system(e647b091-316e-49d6-91db-ea686f6b4ba4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-xl56r" podUID="e647b091-316e-49d6-91db-ea686f6b4ba4" Mar 13 12:22:30.100723 containerd[1713]: time="2026-03-13T12:22:30.100600143Z" level=error msg="Failed to destroy network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.101181 containerd[1713]: time="2026-03-13T12:22:30.101123104Z" level=error msg="encountered an error cleaning up failed sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.101286 containerd[1713]: time="2026-03-13T12:22:30.101264024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-749vw,Uid:f9601ee0-5635-473a-aecf-cb8b509e8382,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.101697 kubelet[3140]: E0313 12:22:30.101602 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.101697 kubelet[3140]: E0313 12:22:30.101668 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-749vw" Mar 13 12:22:30.101893 kubelet[3140]: E0313 12:22:30.101685 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-749vw" Mar 13 12:22:30.102309 kubelet[3140]: E0313 12:22:30.101875 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-749vw_kube-system(f9601ee0-5635-473a-aecf-cb8b509e8382)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-749vw_kube-system(f9601ee0-5635-473a-aecf-cb8b509e8382)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-749vw" podUID="f9601ee0-5635-473a-aecf-cb8b509e8382" Mar 13 12:22:30.106821 containerd[1713]: 
time="2026-03-13T12:22:30.106773556Z" level=error msg="Failed to destroy network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.107995 containerd[1713]: time="2026-03-13T12:22:30.107964398Z" level=error msg="encountered an error cleaning up failed sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.108047 containerd[1713]: time="2026-03-13T12:22:30.108019718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-4ctwx,Uid:5012c233-3465-4ca7-bdc4-e2ef60a160c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.108311 kubelet[3140]: E0313 12:22:30.108275 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.108372 kubelet[3140]: E0313 12:22:30.108327 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" Mar 13 12:22:30.108372 kubelet[3140]: E0313 12:22:30.108344 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" Mar 13 12:22:30.108430 kubelet[3140]: E0313 12:22:30.108391 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d8b56c7bd-4ctwx_calico-system(5012c233-3465-4ca7-bdc4-e2ef60a160c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d8b56c7bd-4ctwx_calico-system(5012c233-3465-4ca7-bdc4-e2ef60a160c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" podUID="5012c233-3465-4ca7-bdc4-e2ef60a160c5" Mar 13 12:22:30.108681 containerd[1713]: time="2026-03-13T12:22:30.108562639Z" level=error msg="Failed to destroy network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 13 12:22:30.108977 containerd[1713]: time="2026-03-13T12:22:30.108883960Z" level=error msg="encountered an error cleaning up failed sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.108977 containerd[1713]: time="2026-03-13T12:22:30.108944560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8d6f4b5-8vb87,Uid:b22d9553-9cf2-4f77-9f81-2807136a31dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.109636 kubelet[3140]: E0313 12:22:30.109086 3140 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.109636 kubelet[3140]: E0313 12:22:30.109124 3140 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" Mar 13 
12:22:30.109636 kubelet[3140]: E0313 12:22:30.109139 3140 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" Mar 13 12:22:30.109747 kubelet[3140]: E0313 12:22:30.109179 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77b8d6f4b5-8vb87_calico-system(b22d9553-9cf2-4f77-9f81-2807136a31dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77b8d6f4b5-8vb87_calico-system(b22d9553-9cf2-4f77-9f81-2807136a31dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" podUID="b22d9553-9cf2-4f77-9f81-2807136a31dd" Mar 13 12:22:30.120057 kubelet[3140]: I0313 12:22:30.119729 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:30.120467 containerd[1713]: time="2026-03-13T12:22:30.120414064Z" level=info msg="StopPodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\"" Mar 13 12:22:30.120699 containerd[1713]: time="2026-03-13T12:22:30.120607464Z" level=info msg="Ensure that sandbox 73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e in task-service has been cleanup successfully" Mar 13 
12:22:30.122534 kubelet[3140]: I0313 12:22:30.122467 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:30.122990 containerd[1713]: time="2026-03-13T12:22:30.122953749Z" level=info msg="StopPodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\"" Mar 13 12:22:30.124704 containerd[1713]: time="2026-03-13T12:22:30.123139989Z" level=info msg="Ensure that sandbox 5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6 in task-service has been cleanup successfully" Mar 13 12:22:30.154708 kubelet[3140]: I0313 12:22:30.154073 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:22:30.157489 containerd[1713]: time="2026-03-13T12:22:30.155699976Z" level=info msg="StopPodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\"" Mar 13 12:22:30.157489 containerd[1713]: time="2026-03-13T12:22:30.155887497Z" level=info msg="Ensure that sandbox f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3 in task-service has been cleanup successfully" Mar 13 12:22:30.162918 kubelet[3140]: I0313 12:22:30.162867 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:22:30.163766 containerd[1713]: time="2026-03-13T12:22:30.163710473Z" level=info msg="StopPodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\"" Mar 13 12:22:30.165541 containerd[1713]: time="2026-03-13T12:22:30.163893553Z" level=info msg="Ensure that sandbox 13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd in task-service has been cleanup successfully" Mar 13 12:22:30.169502 kubelet[3140]: I0313 12:22:30.168951 3140 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Mar 13 12:22:30.170229 containerd[1713]: time="2026-03-13T12:22:30.169836525Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 12:22:30.171023 containerd[1713]: time="2026-03-13T12:22:30.170036486Z" level=info msg="StopPodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\"" Mar 13 12:22:30.171730 containerd[1713]: time="2026-03-13T12:22:30.171701769Z" level=info msg="Ensure that sandbox b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450 in task-service has been cleanup successfully" Mar 13 12:22:30.179726 kubelet[3140]: I0313 12:22:30.179248 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Mar 13 12:22:30.188889 containerd[1713]: time="2026-03-13T12:22:30.188652324Z" level=info msg="StopPodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\"" Mar 13 12:22:30.188889 containerd[1713]: time="2026-03-13T12:22:30.188845284Z" level=info msg="Ensure that sandbox d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71 in task-service has been cleanup successfully" Mar 13 12:22:30.221704 kubelet[3140]: I0313 12:22:30.220880 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:30.222372 containerd[1713]: time="2026-03-13T12:22:30.222104153Z" level=info msg="StopPodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\"" Mar 13 12:22:30.223436 containerd[1713]: time="2026-03-13T12:22:30.222931475Z" level=info msg="Ensure that sandbox df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91 in task-service has been cleanup 
successfully" Mar 13 12:22:30.229350 kubelet[3140]: I0313 12:22:30.228987 3140 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:30.232614 containerd[1713]: time="2026-03-13T12:22:30.232581094Z" level=info msg="StopPodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\"" Mar 13 12:22:30.233209 containerd[1713]: time="2026-03-13T12:22:30.233183896Z" level=info msg="Ensure that sandbox 69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038 in task-service has been cleanup successfully" Mar 13 12:22:30.239592 containerd[1713]: time="2026-03-13T12:22:30.239545709Z" level=error msg="StopPodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" failed" error="failed to destroy network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.240002 kubelet[3140]: E0313 12:22:30.239758 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:30.240002 kubelet[3140]: E0313 12:22:30.239811 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e"} Mar 13 12:22:30.240002 kubelet[3140]: E0313 12:22:30.239864 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"077001e3-6017-4dce-830f-acc444babdea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.240002 kubelet[3140]: E0313 12:22:30.239905 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"077001e3-6017-4dce-830f-acc444babdea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-859785876c-5rs9x" podUID="077001e3-6017-4dce-830f-acc444babdea" Mar 13 12:22:30.268514 containerd[1713]: time="2026-03-13T12:22:30.268185728Z" level=info msg="CreateContainer within sandbox \"dabd40a418f812b42d48758dbd68e0f904b1fcb81ea8b1db5cd93f439afae3c6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"72120054aa1d93facf4522f5a5bccf62798d663f279f50bbf00f4f33fbb8a026\"" Mar 13 12:22:30.269200 containerd[1713]: time="2026-03-13T12:22:30.269156130Z" level=info msg="StartContainer for \"72120054aa1d93facf4522f5a5bccf62798d663f279f50bbf00f4f33fbb8a026\"" Mar 13 12:22:30.282967 containerd[1713]: time="2026-03-13T12:22:30.282846118Z" level=error msg="StopPodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" failed" error="failed to destroy network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.283463 kubelet[3140]: E0313 12:22:30.283224 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:22:30.283463 kubelet[3140]: E0313 12:22:30.283277 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd"} Mar 13 12:22:30.283463 kubelet[3140]: E0313 12:22:30.283307 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b9f33bdd-2737-45fd-8259-eb04da313d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.283463 kubelet[3140]: E0313 12:22:30.283332 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b9f33bdd-2737-45fd-8259-eb04da313d49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-4rnvb" podUID="b9f33bdd-2737-45fd-8259-eb04da313d49" Mar 13 12:22:30.292072 containerd[1713]: time="2026-03-13T12:22:30.291936616Z" level=error msg="StopPodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" failed" error="failed to destroy network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.292498 kubelet[3140]: E0313 12:22:30.292450 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:30.294450 kubelet[3140]: E0313 12:22:30.294294 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6"} Mar 13 12:22:30.294450 kubelet[3140]: E0313 12:22:30.294368 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.294450 kubelet[3140]: E0313 12:22:30.294404 3140 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mh4sx" podUID="2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c" Mar 13 12:22:30.303140 containerd[1713]: time="2026-03-13T12:22:30.303087319Z" level=error msg="StopPodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" failed" error="failed to destroy network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.303547 kubelet[3140]: E0313 12:22:30.303509 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Mar 13 12:22:30.303674 kubelet[3140]: E0313 12:22:30.303654 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"} Mar 13 12:22:30.303863 kubelet[3140]: E0313 12:22:30.303764 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9601ee0-5635-473a-aecf-cb8b509e8382\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.303863 kubelet[3140]: E0313 12:22:30.303813 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9601ee0-5635-473a-aecf-cb8b509e8382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-749vw" podUID="f9601ee0-5635-473a-aecf-cb8b509e8382" Mar 13 12:22:30.310682 containerd[1713]: time="2026-03-13T12:22:30.310632775Z" level=error msg="StopPodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" failed" error="failed to destroy network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.311188 kubelet[3140]: E0313 12:22:30.310875 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:30.311188 kubelet[3140]: E0313 12:22:30.310924 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91"} Mar 13 12:22:30.311188 kubelet[3140]: E0313 12:22:30.310983 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5012c233-3465-4ca7-bdc4-e2ef60a160c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.311188 kubelet[3140]: E0313 12:22:30.311011 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5012c233-3465-4ca7-bdc4-e2ef60a160c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" podUID="5012c233-3465-4ca7-bdc4-e2ef60a160c5" Mar 13 12:22:30.322667 containerd[1713]: time="2026-03-13T12:22:30.322617200Z" level=error msg="StopPodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" failed" error="failed to destroy network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 13 12:22:30.323262 kubelet[3140]: E0313 12:22:30.323111 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:22:30.323262 kubelet[3140]: E0313 12:22:30.323159 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3"} Mar 13 12:22:30.323262 kubelet[3140]: E0313 12:22:30.323192 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f01a07f-bab0-4312-a812-86688ae7f0c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.323262 kubelet[3140]: E0313 12:22:30.323219 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01a07f-bab0-4312-a812-86688ae7f0c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" podUID="5f01a07f-bab0-4312-a812-86688ae7f0c8" Mar 13 
12:22:30.329517 containerd[1713]: time="2026-03-13T12:22:30.329446454Z" level=error msg="StopPodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" failed" error="failed to destroy network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.329874 kubelet[3140]: E0313 12:22:30.329713 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Mar 13 12:22:30.329874 kubelet[3140]: E0313 12:22:30.329771 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"} Mar 13 12:22:30.329874 kubelet[3140]: E0313 12:22:30.329803 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b22d9553-9cf2-4f77-9f81-2807136a31dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.329874 kubelet[3140]: E0313 12:22:30.329833 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b22d9553-9cf2-4f77-9f81-2807136a31dd\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" podUID="b22d9553-9cf2-4f77-9f81-2807136a31dd" Mar 13 12:22:30.334887 containerd[1713]: time="2026-03-13T12:22:30.334667984Z" level=error msg="StopPodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" failed" error="failed to destroy network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 12:22:30.335443 kubelet[3140]: E0313 12:22:30.335380 3140 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:30.335443 kubelet[3140]: E0313 12:22:30.335429 3140 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038"} Mar 13 12:22:30.335639 kubelet[3140]: E0313 12:22:30.335460 3140 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e647b091-316e-49d6-91db-ea686f6b4ba4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 13 12:22:30.335639 kubelet[3140]: E0313 12:22:30.335520 3140 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e647b091-316e-49d6-91db-ea686f6b4ba4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-xl56r" podUID="e647b091-316e-49d6-91db-ea686f6b4ba4" Mar 13 12:22:30.342677 systemd[1]: Started cri-containerd-72120054aa1d93facf4522f5a5bccf62798d663f279f50bbf00f4f33fbb8a026.scope - libcontainer container 72120054aa1d93facf4522f5a5bccf62798d663f279f50bbf00f4f33fbb8a026. 
Mar 13 12:22:30.374690 containerd[1713]: time="2026-03-13T12:22:30.374642747Z" level=info msg="StartContainer for \"72120054aa1d93facf4522f5a5bccf62798d663f279f50bbf00f4f33fbb8a026\" returns successfully" Mar 13 12:22:31.233687 containerd[1713]: time="2026-03-13T12:22:31.233261993Z" level=info msg="StopPodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\"" Mar 13 12:22:31.303555 kubelet[3140]: I0313 12:22:31.303249 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-bvvnz" podStartSLOduration=1.866045521 podStartE2EDuration="23.303234297s" podCreationTimestamp="2026-03-13 12:22:08 +0000 UTC" firstStartedPulling="2026-03-13 12:22:08.703355489 +0000 UTC m=+23.864135809" lastFinishedPulling="2026-03-13 12:22:30.140544265 +0000 UTC m=+45.301324585" observedRunningTime="2026-03-13 12:22:31.287406304 +0000 UTC m=+46.448186624" watchObservedRunningTime="2026-03-13 12:22:31.303234297 +0000 UTC m=+46.464014617" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.303 [INFO][4382] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.303 [INFO][4382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" iface="eth0" netns="/var/run/netns/cni-4c72f3e4-8148-5b73-2a0e-f8936ffde589" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.304 [INFO][4382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" iface="eth0" netns="/var/run/netns/cni-4c72f3e4-8148-5b73-2a0e-f8936ffde589" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.305 [INFO][4382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" iface="eth0" netns="/var/run/netns/cni-4c72f3e4-8148-5b73-2a0e-f8936ffde589" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.305 [INFO][4382] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.305 [INFO][4382] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.327 [INFO][4409] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.327 [INFO][4409] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.327 [INFO][4409] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.336 [WARNING][4409] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.336 [INFO][4409] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.338 [INFO][4409] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:31.343416 containerd[1713]: 2026-03-13 12:22:31.341 [INFO][4382] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:31.344343 containerd[1713]: time="2026-03-13T12:22:31.343942061Z" level=info msg="TearDown network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" successfully" Mar 13 12:22:31.344343 containerd[1713]: time="2026-03-13T12:22:31.343975181Z" level=info msg="StopPodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" returns successfully" Mar 13 12:22:31.346900 systemd[1]: run-netns-cni\x2d4c72f3e4\x2d8148\x2d5b73\x2d2a0e\x2df8936ffde589.mount: Deactivated successfully. 
Mar 13 12:22:31.381713 kubelet[3140]: I0313 12:22:31.381410 3140 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-nginx-config\" (UniqueName: \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-nginx-config\") pod \"077001e3-6017-4dce-830f-acc444babdea\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " Mar 13 12:22:31.381713 kubelet[3140]: I0313 12:22:31.381450 3140 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-whisker-ca-bundle\") pod \"077001e3-6017-4dce-830f-acc444babdea\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " Mar 13 12:22:31.381713 kubelet[3140]: I0313 12:22:31.381491 3140 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/077001e3-6017-4dce-830f-acc444babdea-kube-api-access-wcqmf\" (UniqueName: \"kubernetes.io/projected/077001e3-6017-4dce-830f-acc444babdea-kube-api-access-wcqmf\") pod \"077001e3-6017-4dce-830f-acc444babdea\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " Mar 13 12:22:31.381713 kubelet[3140]: I0313 12:22:31.381514 3140 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/077001e3-6017-4dce-830f-acc444babdea-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/077001e3-6017-4dce-830f-acc444babdea-whisker-backend-key-pair\") pod \"077001e3-6017-4dce-830f-acc444babdea\" (UID: \"077001e3-6017-4dce-830f-acc444babdea\") " Mar 13 12:22:31.381909 kubelet[3140]: I0313 12:22:31.381846 3140 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-nginx-config" pod "077001e3-6017-4dce-830f-acc444babdea" (UID: "077001e3-6017-4dce-830f-acc444babdea"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 12:22:31.382431 kubelet[3140]: I0313 12:22:31.382180 3140 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-whisker-ca-bundle" pod "077001e3-6017-4dce-830f-acc444babdea" (UID: "077001e3-6017-4dce-830f-acc444babdea"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 12:22:31.384706 kubelet[3140]: I0313 12:22:31.384670 3140 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077001e3-6017-4dce-830f-acc444babdea-whisker-backend-key-pair" pod "077001e3-6017-4dce-830f-acc444babdea" (UID: "077001e3-6017-4dce-830f-acc444babdea"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 12:22:31.385220 kubelet[3140]: I0313 12:22:31.385186 3140 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077001e3-6017-4dce-830f-acc444babdea-kube-api-access-wcqmf" pod "077001e3-6017-4dce-830f-acc444babdea" (UID: "077001e3-6017-4dce-830f-acc444babdea"). InnerVolumeSpecName "kube-api-access-wcqmf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 12:22:31.386261 systemd[1]: var-lib-kubelet-pods-077001e3\x2d6017\x2d4dce\x2d830f\x2dacc444babdea-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 13 12:22:31.481927 kubelet[3140]: I0313 12:22:31.481844 3140 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wcqmf\" (UniqueName: \"kubernetes.io/projected/077001e3-6017-4dce-830f-acc444babdea-kube-api-access-wcqmf\") on node \"ci-4081.3.101-d13a81acd8\" DevicePath \"\"" Mar 13 12:22:31.481927 kubelet[3140]: I0313 12:22:31.481877 3140 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/077001e3-6017-4dce-830f-acc444babdea-whisker-backend-key-pair\") on node \"ci-4081.3.101-d13a81acd8\" DevicePath \"\"" Mar 13 12:22:31.481927 kubelet[3140]: I0313 12:22:31.481889 3140 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-nginx-config\") on node \"ci-4081.3.101-d13a81acd8\" DevicePath \"\"" Mar 13 12:22:31.481927 kubelet[3140]: I0313 12:22:31.481899 3140 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/077001e3-6017-4dce-830f-acc444babdea-whisker-ca-bundle\") on node \"ci-4081.3.101-d13a81acd8\" DevicePath \"\"" Mar 13 12:22:31.518637 systemd[1]: var-lib-kubelet-pods-077001e3\x2d6017\x2d4dce\x2d830f\x2dacc444babdea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwcqmf.mount: Deactivated successfully. Mar 13 12:22:32.108503 kernel: calico-node[4508]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 13 12:22:32.269716 systemd[1]: Removed slice kubepods-besteffort-pod077001e3_6017_4dce_830f_acc444babdea.slice - libcontainer container kubepods-besteffort-pod077001e3_6017_4dce_830f_acc444babdea.slice. Mar 13 12:22:32.409447 systemd[1]: Created slice kubepods-besteffort-podf5d2c42c_d12c_48cb_a794_f88ea3784f72.slice - libcontainer container kubepods-besteffort-podf5d2c42c_d12c_48cb_a794_f88ea3784f72.slice. 
Mar 13 12:22:32.491896 kubelet[3140]: I0313 12:22:32.491670 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5d2c42c-d12c-48cb-a794-f88ea3784f72-whisker-backend-key-pair\") pod \"whisker-fb4945cbd-5trv9\" (UID: \"f5d2c42c-d12c-48cb-a794-f88ea3784f72\") " pod="calico-system/whisker-fb4945cbd-5trv9" Mar 13 12:22:32.491896 kubelet[3140]: I0313 12:22:32.491721 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d2c42c-d12c-48cb-a794-f88ea3784f72-whisker-ca-bundle\") pod \"whisker-fb4945cbd-5trv9\" (UID: \"f5d2c42c-d12c-48cb-a794-f88ea3784f72\") " pod="calico-system/whisker-fb4945cbd-5trv9" Mar 13 12:22:32.491896 kubelet[3140]: I0313 12:22:32.491744 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f5d2c42c-d12c-48cb-a794-f88ea3784f72-nginx-config\") pod \"whisker-fb4945cbd-5trv9\" (UID: \"f5d2c42c-d12c-48cb-a794-f88ea3784f72\") " pod="calico-system/whisker-fb4945cbd-5trv9" Mar 13 12:22:32.491896 kubelet[3140]: I0313 12:22:32.491849 3140 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrs4f\" (UniqueName: \"kubernetes.io/projected/f5d2c42c-d12c-48cb-a794-f88ea3784f72-kube-api-access-lrs4f\") pod \"whisker-fb4945cbd-5trv9\" (UID: \"f5d2c42c-d12c-48cb-a794-f88ea3784f72\") " pod="calico-system/whisker-fb4945cbd-5trv9" Mar 13 12:22:32.532774 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.532881 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.539012 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.544778 kernel: icmp: 
detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.561518 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.567594 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.567702 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.588670 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.588792 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.588811 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.96.0.10 Mar 13 12:22:32.723468 containerd[1713]: time="2026-03-13T12:22:32.723353458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb4945cbd-5trv9,Uid:f5d2c42c-d12c-48cb-a794-f88ea3784f72,Namespace:calico-system,Attempt:0,}" Mar 13 12:22:32.808522 systemd-networkd[1359]: vxlan.calico: Link UP Mar 13 12:22:32.808529 systemd-networkd[1359]: vxlan.calico: Gained carrier Mar 13 12:22:32.911881 systemd-networkd[1359]: cali895e1484bb4: Link UP Mar 13 12:22:32.912078 systemd-networkd[1359]: cali895e1484bb4: Gained carrier Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.797 [INFO][4590] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0 whisker-fb4945cbd- calico-system f5d2c42c-d12c-48cb-a794-f88ea3784f72 921 0 2026-03-13 12:22:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fb4945cbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 whisker-fb4945cbd-5trv9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali895e1484bb4 [] [] }} 
ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.801 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.845 [INFO][4611] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" HandleID="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.855 [INFO][4611] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" HandleID="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"whisker-fb4945cbd-5trv9", "timestamp":"2026-03-13 12:22:32.84582867 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000341080)} Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.855 [INFO][4611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.855 [INFO][4611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.855 [INFO][4611] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.858 [INFO][4611] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.862 [INFO][4611] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.868 [INFO][4611] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.870 [INFO][4611] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.872 [INFO][4611] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.872 [INFO][4611] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.875 [INFO][4611] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5 Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.884 [INFO][4611] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" 
host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.900 [INFO][4611] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.89.193/26] block=192.168.89.192/26 handle="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.901 [INFO][4611] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.193/26] handle="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.901 [INFO][4611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:32.937004 containerd[1713]: 2026-03-13 12:22:32.901 [INFO][4611] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.193/26] IPv6=[] ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" HandleID="k8s-pod-network.51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.938827 containerd[1713]: 2026-03-13 12:22:32.904 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0", GenerateName:"whisker-fb4945cbd-", Namespace:"calico-system", SelfLink:"", UID:"f5d2c42c-d12c-48cb-a794-f88ea3784f72", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fb4945cbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"whisker-fb4945cbd-5trv9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali895e1484bb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:32.938827 containerd[1713]: 2026-03-13 12:22:32.904 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.193/32] ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.938827 containerd[1713]: 2026-03-13 12:22:32.904 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali895e1484bb4 ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.938827 containerd[1713]: 2026-03-13 12:22:32.913 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" 
WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.938827 containerd[1713]: 2026-03-13 12:22:32.914 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0", GenerateName:"whisker-fb4945cbd-", Namespace:"calico-system", SelfLink:"", UID:"f5d2c42c-d12c-48cb-a794-f88ea3784f72", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fb4945cbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5", Pod:"whisker-fb4945cbd-5trv9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali895e1484bb4", MAC:"3e:a0:6b:d6:ef:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:32.938827 containerd[1713]: 
2026-03-13 12:22:32.932 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5" Namespace="calico-system" Pod="whisker-fb4945cbd-5trv9" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--fb4945cbd--5trv9-eth0" Mar 13 12:22:32.968946 kubelet[3140]: I0313 12:22:32.968841 3140 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="077001e3-6017-4dce-830f-acc444babdea" path="/var/lib/kubelet/pods/077001e3-6017-4dce-830f-acc444babdea/volumes" Mar 13 12:22:32.978537 containerd[1713]: time="2026-03-13T12:22:32.972661251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:32.978537 containerd[1713]: time="2026-03-13T12:22:32.972723851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:32.978537 containerd[1713]: time="2026-03-13T12:22:32.972739931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:32.978537 containerd[1713]: time="2026-03-13T12:22:32.972823652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:33.012587 systemd[1]: Started cri-containerd-51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5.scope - libcontainer container 51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5. 
Mar 13 12:22:33.057815 containerd[1713]: time="2026-03-13T12:22:33.057646186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb4945cbd-5trv9,Uid:f5d2c42c-d12c-48cb-a794-f88ea3784f72,Namespace:calico-system,Attempt:0,} returns sandbox id \"51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5\"" Mar 13 12:22:33.061725 containerd[1713]: time="2026-03-13T12:22:33.061682234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 12:22:33.612005 systemd[1]: run-containerd-runc-k8s.io-51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5-runc.xy37gf.mount: Deactivated successfully. Mar 13 12:22:34.019654 systemd-networkd[1359]: cali895e1484bb4: Gained IPv6LL Mar 13 12:22:34.668802 containerd[1713]: time="2026-03-13T12:22:34.668744541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:34.674340 containerd[1713]: time="2026-03-13T12:22:34.674084032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 13 12:22:34.679263 containerd[1713]: time="2026-03-13T12:22:34.679227842Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:34.685416 containerd[1713]: time="2026-03-13T12:22:34.685084934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:34.685933 containerd[1713]: time="2026-03-13T12:22:34.685902256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.624178501s" Mar 13 12:22:34.685997 containerd[1713]: time="2026-03-13T12:22:34.685935016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 13 12:22:34.697093 containerd[1713]: time="2026-03-13T12:22:34.696881558Z" level=info msg="CreateContainer within sandbox \"51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 12:22:34.744685 containerd[1713]: time="2026-03-13T12:22:34.744538696Z" level=info msg="CreateContainer within sandbox \"51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5f911bd3b0626813bbaf92173cf1b85de5053a61cd4055443af61acb69eb1c12\"" Mar 13 12:22:34.745428 containerd[1713]: time="2026-03-13T12:22:34.745266018Z" level=info msg="StartContainer for \"5f911bd3b0626813bbaf92173cf1b85de5053a61cd4055443af61acb69eb1c12\"" Mar 13 12:22:34.775691 systemd[1]: Started cri-containerd-5f911bd3b0626813bbaf92173cf1b85de5053a61cd4055443af61acb69eb1c12.scope - libcontainer container 5f911bd3b0626813bbaf92173cf1b85de5053a61cd4055443af61acb69eb1c12. Mar 13 12:22:34.812260 containerd[1713]: time="2026-03-13T12:22:34.812044395Z" level=info msg="StartContainer for \"5f911bd3b0626813bbaf92173cf1b85de5053a61cd4055443af61acb69eb1c12\" returns successfully" Mar 13 12:22:34.821090 containerd[1713]: time="2026-03-13T12:22:34.820873574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 12:22:34.851640 systemd-networkd[1359]: vxlan.calico: Gained IPv6LL Mar 13 12:22:36.777029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150634505.mount: Deactivated successfully. 
Mar 13 12:22:36.840524 containerd[1713]: time="2026-03-13T12:22:36.840027047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:36.845216 containerd[1713]: time="2026-03-13T12:22:36.844970578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 13 12:22:36.849563 containerd[1713]: time="2026-03-13T12:22:36.849203546Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:36.854495 containerd[1713]: time="2026-03-13T12:22:36.854449237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:36.855276 containerd[1713]: time="2026-03-13T12:22:36.855240199Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.034326785s" Mar 13 12:22:36.855324 containerd[1713]: time="2026-03-13T12:22:36.855278599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 13 12:22:36.865605 containerd[1713]: time="2026-03-13T12:22:36.865413980Z" level=info msg="CreateContainer within sandbox \"51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 12:22:36.906489 
containerd[1713]: time="2026-03-13T12:22:36.906430584Z" level=info msg="CreateContainer within sandbox \"51b747e6424ddef3f2f5703986c2fa7d7842df8b2edec259542f3b5a7e3a23e5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"564f3c4a4975afd3c6b77d9cd5793133349fe8656161c1ef8391dfa6a95f8d18\"" Mar 13 12:22:36.907328 containerd[1713]: time="2026-03-13T12:22:36.907244466Z" level=info msg="StartContainer for \"564f3c4a4975afd3c6b77d9cd5793133349fe8656161c1ef8391dfa6a95f8d18\"" Mar 13 12:22:36.938651 systemd[1]: Started cri-containerd-564f3c4a4975afd3c6b77d9cd5793133349fe8656161c1ef8391dfa6a95f8d18.scope - libcontainer container 564f3c4a4975afd3c6b77d9cd5793133349fe8656161c1ef8391dfa6a95f8d18. Mar 13 12:22:36.977059 containerd[1713]: time="2026-03-13T12:22:36.977004289Z" level=info msg="StartContainer for \"564f3c4a4975afd3c6b77d9cd5793133349fe8656161c1ef8391dfa6a95f8d18\" returns successfully" Mar 13 12:22:37.604600 kernel: net_ratelimit: 3 callbacks suppressed Mar 13 12:22:37.604722 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.110.240.172 Mar 13 12:22:40.944530 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.110.240.172 Mar 13 12:22:41.793510 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.105.109.17 Mar 13 12:22:41.959958 containerd[1713]: time="2026-03-13T12:22:41.958862600Z" level=info msg="StopPodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\"" Mar 13 12:22:42.019616 kubelet[3140]: I0313 12:22:42.019085 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-fb4945cbd-5trv9" podStartSLOduration=6.2238502239999995 podStartE2EDuration="10.019068431s" podCreationTimestamp="2026-03-13 12:22:32 +0000 UTC" firstStartedPulling="2026-03-13 12:22:33.061392114 +0000 UTC m=+48.222172434" lastFinishedPulling="2026-03-13 12:22:36.856610321 +0000 UTC m=+52.017390641" 
observedRunningTime="2026-03-13 12:22:37.272278445 +0000 UTC m=+52.433058765" watchObservedRunningTime="2026-03-13 12:22:42.019068431 +0000 UTC m=+57.179848751" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.016 [INFO][4849] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.017 [INFO][4849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" iface="eth0" netns="/var/run/netns/cni-d749cb6f-2f1f-878e-0bab-3db0f510cb6c" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.017 [INFO][4849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" iface="eth0" netns="/var/run/netns/cni-d749cb6f-2f1f-878e-0bab-3db0f510cb6c" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.020 [INFO][4849] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" iface="eth0" netns="/var/run/netns/cni-d749cb6f-2f1f-878e-0bab-3db0f510cb6c" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.020 [INFO][4849] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.020 [INFO][4849] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.045 [INFO][4856] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.045 [INFO][4856] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.045 [INFO][4856] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.055 [WARNING][4856] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.055 [INFO][4856] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.058 [INFO][4856] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:42.061927 containerd[1713]: 2026-03-13 12:22:42.060 [INFO][4849] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:42.063744 containerd[1713]: time="2026-03-13T12:22:42.063569394Z" level=info msg="TearDown network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" successfully" Mar 13 12:22:42.063744 containerd[1713]: time="2026-03-13T12:22:42.063608234Z" level=info msg="StopPodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" returns successfully" Mar 13 12:22:42.065779 systemd[1]: run-netns-cni\x2dd749cb6f\x2d2f1f\x2d878e\x2d0bab\x2d3db0f510cb6c.mount: Deactivated successfully. 
Mar 13 12:22:42.071583 containerd[1713]: time="2026-03-13T12:22:42.071543368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-4ctwx,Uid:5012c233-3465-4ca7-bdc4-e2ef60a160c5,Namespace:calico-system,Attempt:1,}" Mar 13 12:22:42.470646 systemd-networkd[1359]: cali4d42079970c: Link UP Mar 13 12:22:42.472684 systemd-networkd[1359]: cali4d42079970c: Gained carrier Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.399 [INFO][4862] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0 calico-apiserver-6d8b56c7bd- calico-system 5012c233-3465-4ca7-bdc4-e2ef60a160c5 962 0 2026-03-13 12:22:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d8b56c7bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 calico-apiserver-6d8b56c7bd-4ctwx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4d42079970c [] [] }} ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.399 [INFO][4862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.423 [INFO][4874] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" HandleID="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.432 [INFO][4874] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" HandleID="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e5750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"calico-apiserver-6d8b56c7bd-4ctwx", "timestamp":"2026-03-13 12:22:42.42330802 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000350dc0)} Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.432 [INFO][4874] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.432 [INFO][4874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.432 [INFO][4874] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.435 [INFO][4874] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.439 [INFO][4874] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.444 [INFO][4874] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.446 [INFO][4874] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.448 [INFO][4874] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.448 [INFO][4874] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.450 [INFO][4874] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203 Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.454 [INFO][4874] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.465 [INFO][4874] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.194/26] block=192.168.89.192/26 handle="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.465 [INFO][4874] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.194/26] handle="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.465 [INFO][4874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:42.493329 containerd[1713]: 2026-03-13 12:22:42.465 [INFO][4874] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.194/26] IPv6=[] ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" HandleID="k8s-pod-network.b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.495273 containerd[1713]: 2026-03-13 12:22:42.467 [INFO][4862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5012c233-3465-4ca7-bdc4-e2ef60a160c5", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"calico-apiserver-6d8b56c7bd-4ctwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4d42079970c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:42.495273 containerd[1713]: 2026-03-13 12:22:42.467 [INFO][4862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.194/32] ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.495273 containerd[1713]: 2026-03-13 12:22:42.467 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d42079970c ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.495273 containerd[1713]: 2026-03-13 12:22:42.472 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" 
WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.495273 containerd[1713]: 2026-03-13 12:22:42.473 [INFO][4862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5012c233-3465-4ca7-bdc4-e2ef60a160c5", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203", Pod:"calico-apiserver-6d8b56c7bd-4ctwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4d42079970c", MAC:"8e:d5:1a:bb:68:06", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:42.495273 containerd[1713]: 2026-03-13 12:22:42.489 [INFO][4862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-4ctwx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:42.513615 containerd[1713]: time="2026-03-13T12:22:42.513354266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:42.513760 containerd[1713]: time="2026-03-13T12:22:42.513409866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:42.513760 containerd[1713]: time="2026-03-13T12:22:42.513435186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:42.513760 containerd[1713]: time="2026-03-13T12:22:42.513560987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:42.545629 systemd[1]: Started cri-containerd-b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203.scope - libcontainer container b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203. 
Mar 13 12:22:42.578277 containerd[1713]: time="2026-03-13T12:22:42.578218106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-4ctwx,Uid:5012c233-3465-4ca7-bdc4-e2ef60a160c5,Namespace:calico-system,Attempt:1,} returns sandbox id \"b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203\"" Mar 13 12:22:42.591499 containerd[1713]: time="2026-03-13T12:22:42.590920130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 12:22:43.959538 containerd[1713]: time="2026-03-13T12:22:43.959119702Z" level=info msg="StopPodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\"" Mar 13 12:22:43.959538 containerd[1713]: time="2026-03-13T12:22:43.959193623Z" level=info msg="StopPodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\"" Mar 13 12:22:43.960995 containerd[1713]: time="2026-03-13T12:22:43.959149143Z" level=info msg="StopPodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\"" Mar 13 12:22:44.068560 systemd-networkd[1359]: cali4d42079970c: Gained IPv6LL Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.055 [INFO][4983] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.056 [INFO][4983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" iface="eth0" netns="/var/run/netns/cni-6fbc6fea-fc78-9182-4676-390c7fedf1e8" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.056 [INFO][4983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" iface="eth0" netns="/var/run/netns/cni-6fbc6fea-fc78-9182-4676-390c7fedf1e8" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.056 [INFO][4983] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" iface="eth0" netns="/var/run/netns/cni-6fbc6fea-fc78-9182-4676-390c7fedf1e8" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.056 [INFO][4983] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.056 [INFO][4983] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.131 [INFO][5000] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.134 [INFO][5000] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.134 [INFO][5000] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.151 [WARNING][5000] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.152 [INFO][5000] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.158 [INFO][5000] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:44.168166 containerd[1713]: 2026-03-13 12:22:44.164 [INFO][4983] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:44.170424 containerd[1713]: time="2026-03-13T12:22:44.169579292Z" level=info msg="TearDown network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" successfully" Mar 13 12:22:44.170424 containerd[1713]: time="2026-03-13T12:22:44.169613452Z" level=info msg="StopPodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" returns successfully" Mar 13 12:22:44.171285 systemd[1]: run-netns-cni\x2d6fbc6fea\x2dfc78\x2d9182\x2d4676\x2d390c7fedf1e8.mount: Deactivated successfully. 
Mar 13 12:22:44.177500 containerd[1713]: time="2026-03-13T12:22:44.177001226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mh4sx,Uid:2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c,Namespace:kube-system,Attempt:1,}" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.083 [INFO][4984] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.083 [INFO][4984] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" iface="eth0" netns="/var/run/netns/cni-90ecfda8-6d3c-6596-2070-dce25d9fc364" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.084 [INFO][4984] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" iface="eth0" netns="/var/run/netns/cni-90ecfda8-6d3c-6596-2070-dce25d9fc364" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.084 [INFO][4984] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" iface="eth0" netns="/var/run/netns/cni-90ecfda8-6d3c-6596-2070-dce25d9fc364" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.084 [INFO][4984] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.084 [INFO][4984] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.138 [INFO][5006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.138 [INFO][5006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.158 [INFO][5006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.177 [WARNING][5006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.177 [INFO][5006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.179 [INFO][5006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:44.184956 containerd[1713]: 2026-03-13 12:22:44.183 [INFO][4984] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Mar 13 12:22:44.185345 containerd[1713]: time="2026-03-13T12:22:44.185079121Z" level=info msg="TearDown network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" successfully" Mar 13 12:22:44.185345 containerd[1713]: time="2026-03-13T12:22:44.185107161Z" level=info msg="StopPodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" returns successfully" Mar 13 12:22:44.188607 systemd[1]: run-netns-cni\x2d90ecfda8\x2d6d3c\x2d6596\x2d2070\x2ddce25d9fc364.mount: Deactivated successfully. 
Mar 13 12:22:44.192002 containerd[1713]: time="2026-03-13T12:22:44.191942893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8d6f4b5-8vb87,Uid:b22d9553-9cf2-4f77-9f81-2807136a31dd,Namespace:calico-system,Attempt:1,}" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.091 [INFO][4980] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.091 [INFO][4980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" iface="eth0" netns="/var/run/netns/cni-e5d41ed9-c690-56e8-8e6e-dd50386d597b" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.094 [INFO][4980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" iface="eth0" netns="/var/run/netns/cni-e5d41ed9-c690-56e8-8e6e-dd50386d597b" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.095 [INFO][4980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" iface="eth0" netns="/var/run/netns/cni-e5d41ed9-c690-56e8-8e6e-dd50386d597b" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.095 [INFO][4980] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.095 [INFO][4980] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.166 [INFO][5011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.167 [INFO][5011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.179 [INFO][5011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.193 [WARNING][5011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.193 [INFO][5011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.194 [INFO][5011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:44.199399 containerd[1713]: 2026-03-13 12:22:44.197 [INFO][4980] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:44.199883 containerd[1713]: time="2026-03-13T12:22:44.199642028Z" level=info msg="TearDown network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" successfully" Mar 13 12:22:44.199883 containerd[1713]: time="2026-03-13T12:22:44.199877668Z" level=info msg="StopPodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" returns successfully" Mar 13 12:22:44.202962 systemd[1]: run-netns-cni\x2de5d41ed9\x2dc690\x2d56e8\x2d8e6e\x2ddd50386d597b.mount: Deactivated successfully. 
Mar 13 12:22:44.209797 containerd[1713]: time="2026-03-13T12:22:44.209489406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xl56r,Uid:e647b091-316e-49d6-91db-ea686f6b4ba4,Namespace:calico-system,Attempt:1,}" Mar 13 12:22:44.462405 systemd-networkd[1359]: calicd27d42fe8d: Link UP Mar 13 12:22:44.462613 systemd-networkd[1359]: calicd27d42fe8d: Gained carrier Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.302 [INFO][5025] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0 coredns-7d764666f9- kube-system 2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c 975 0 2026-03-13 12:21:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 coredns-7d764666f9-mh4sx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd27d42fe8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.302 [INFO][5025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.368 [INFO][5055] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" 
HandleID="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.388 [INFO][5055] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" HandleID="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"coredns-7d764666f9-mh4sx", "timestamp":"2026-03-13 12:22:44.3683039 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003bfb80)} Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.388 [INFO][5055] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.389 [INFO][5055] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.389 [INFO][5055] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.392 [INFO][5055] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.400 [INFO][5055] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.412 [INFO][5055] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.415 [INFO][5055] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.421 [INFO][5055] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.421 [INFO][5055] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.425 [INFO][5055] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.435 [INFO][5055] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.455 [INFO][5055] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.195/26] block=192.168.89.192/26 handle="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.455 [INFO][5055] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.195/26] handle="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.455 [INFO][5055] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:44.497406 containerd[1713]: 2026-03-13 12:22:44.455 [INFO][5055] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.195/26] IPv6=[] ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" HandleID="k8s-pod-network.4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.498039 containerd[1713]: 2026-03-13 12:22:44.458 [INFO][5025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"coredns-7d764666f9-mh4sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd27d42fe8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:44.498039 containerd[1713]: 2026-03-13 12:22:44.459 [INFO][5025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.195/32] ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.498039 containerd[1713]: 2026-03-13 12:22:44.459 [INFO][5025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd27d42fe8d 
ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.498039 containerd[1713]: 2026-03-13 12:22:44.463 [INFO][5025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.498039 containerd[1713]: 2026-03-13 12:22:44.463 [INFO][5025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f", 
Pod:"coredns-7d764666f9-mh4sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd27d42fe8d", MAC:"f2:69:f4:70:2d:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:44.498222 containerd[1713]: 2026-03-13 12:22:44.492 [INFO][5025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f" Namespace="kube-system" Pod="coredns-7d764666f9-mh4sx" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:44.535765 containerd[1713]: time="2026-03-13T12:22:44.535534529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:44.535765 containerd[1713]: time="2026-03-13T12:22:44.535602290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:44.535765 containerd[1713]: time="2026-03-13T12:22:44.535613450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:44.535765 containerd[1713]: time="2026-03-13T12:22:44.535720130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:44.562086 systemd[1]: Started cri-containerd-4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f.scope - libcontainer container 4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f. Mar 13 12:22:44.572140 systemd-networkd[1359]: cali8bce95fb3c0: Link UP Mar 13 12:22:44.587975 systemd-networkd[1359]: cali8bce95fb3c0: Gained carrier Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.353 [INFO][5045] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0 goldmane-9f7667bb8- calico-system e647b091-316e-49d6-91db-ea686f6b4ba4 977 0 2026-03-13 12:22:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 goldmane-9f7667bb8-xl56r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8bce95fb3c0 [] [] }} ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.353 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.414 [INFO][5066] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" HandleID="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.445 [INFO][5066] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" HandleID="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"goldmane-9f7667bb8-xl56r", "timestamp":"2026-03-13 12:22:44.414771666 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400046c2c0)} Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.446 [INFO][5066] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.456 [INFO][5066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.456 [INFO][5066] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.496 [INFO][5066] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.506 [INFO][5066] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.516 [INFO][5066] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.520 [INFO][5066] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.526 [INFO][5066] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.527 [INFO][5066] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.529 [INFO][5066] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.537 [INFO][5066] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.554 [INFO][5066] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.196/26] block=192.168.89.192/26 handle="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.554 [INFO][5066] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.196/26] handle="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.558 [INFO][5066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:44.638891 containerd[1713]: 2026-03-13 12:22:44.558 [INFO][5066] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.196/26] IPv6=[] ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" HandleID="k8s-pod-network.52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.639522 containerd[1713]: 2026-03-13 12:22:44.567 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"e647b091-316e-49d6-91db-ea686f6b4ba4", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"goldmane-9f7667bb8-xl56r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8bce95fb3c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:44.639522 containerd[1713]: 2026-03-13 12:22:44.567 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.196/32] ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.639522 containerd[1713]: 2026-03-13 12:22:44.567 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bce95fb3c0 ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.639522 containerd[1713]: 2026-03-13 12:22:44.593 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.639522 containerd[1713]: 2026-03-13 12:22:44.598 [INFO][5045] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"e647b091-316e-49d6-91db-ea686f6b4ba4", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f", Pod:"goldmane-9f7667bb8-xl56r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8bce95fb3c0", MAC:"32:45:93:d5:75:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:44.639522 containerd[1713]: 2026-03-13 12:22:44.630 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f" Namespace="calico-system" Pod="goldmane-9f7667bb8-xl56r" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:44.680937 containerd[1713]: time="2026-03-13T12:22:44.680884719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mh4sx,Uid:2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f\"" Mar 13 12:22:44.693661 systemd-networkd[1359]: cali249b6f6d1cb: Link UP Mar 13 12:22:44.696351 containerd[1713]: time="2026-03-13T12:22:44.696115027Z" level=info msg="CreateContainer within sandbox \"4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 12:22:44.696351 containerd[1713]: time="2026-03-13T12:22:44.696093107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:44.696351 containerd[1713]: time="2026-03-13T12:22:44.696200907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:44.696351 containerd[1713]: time="2026-03-13T12:22:44.696231187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:44.700276 containerd[1713]: time="2026-03-13T12:22:44.698112190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:44.701006 systemd-networkd[1359]: cali249b6f6d1cb: Gained carrier Mar 13 12:22:44.743751 systemd[1]: Started cri-containerd-52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f.scope - libcontainer container 52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f. Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.366 [INFO][5036] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0 calico-kube-controllers-77b8d6f4b5- calico-system b22d9553-9cf2-4f77-9f81-2807136a31dd 976 0 2026-03-13 12:22:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77b8d6f4b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 calico-kube-controllers-77b8d6f4b5-8vb87 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali249b6f6d1cb [] [] }} ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.367 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.429 [INFO][5071] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" HandleID="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.454 [INFO][5071] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" HandleID="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"calico-kube-controllers-77b8d6f4b5-8vb87", "timestamp":"2026-03-13 12:22:44.429946814 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b3080)} Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.454 [INFO][5071] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.557 [INFO][5071] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.557 [INFO][5071] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.599 [INFO][5071] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.612 [INFO][5071] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.628 [INFO][5071] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.642 [INFO][5071] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.648 [INFO][5071] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.648 [INFO][5071] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.650 [INFO][5071] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1 Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.667 [INFO][5071] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.682 [INFO][5071] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.197/26] block=192.168.89.192/26 handle="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.682 [INFO][5071] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.197/26] handle="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.682 [INFO][5071] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:44.750579 containerd[1713]: 2026-03-13 12:22:44.682 [INFO][5071] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.197/26] IPv6=[] ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" HandleID="k8s-pod-network.03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.753922 containerd[1713]: 2026-03-13 12:22:44.687 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0", GenerateName:"calico-kube-controllers-77b8d6f4b5-", Namespace:"calico-system", SelfLink:"", UID:"b22d9553-9cf2-4f77-9f81-2807136a31dd", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8d6f4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"calico-kube-controllers-77b8d6f4b5-8vb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali249b6f6d1cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:44.753922 containerd[1713]: 2026-03-13 12:22:44.688 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.197/32] ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.753922 containerd[1713]: 2026-03-13 12:22:44.688 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali249b6f6d1cb ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.753922 containerd[1713]: 2026-03-13 12:22:44.705 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.753922 containerd[1713]: 2026-03-13 12:22:44.710 [INFO][5036] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0", GenerateName:"calico-kube-controllers-77b8d6f4b5-", Namespace:"calico-system", SelfLink:"", UID:"b22d9553-9cf2-4f77-9f81-2807136a31dd", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8d6f4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1", Pod:"calico-kube-controllers-77b8d6f4b5-8vb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali249b6f6d1cb", MAC:"c2:fe:de:40:0d:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:44.753922 containerd[1713]: 2026-03-13 12:22:44.738 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1" Namespace="calico-system" Pod="calico-kube-controllers-77b8d6f4b5-8vb87" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0" Mar 13 12:22:44.777440 containerd[1713]: time="2026-03-13T12:22:44.777390537Z" level=info msg="CreateContainer within sandbox \"4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d30dc1ccd9cecb5f5d1c3da059cfb8aa06adcaf09249247ba74273b51aa240c\"" Mar 13 12:22:44.779052 containerd[1713]: time="2026-03-13T12:22:44.779016940Z" level=info msg="StartContainer for \"9d30dc1ccd9cecb5f5d1c3da059cfb8aa06adcaf09249247ba74273b51aa240c\"" Mar 13 12:22:44.801908 containerd[1713]: time="2026-03-13T12:22:44.800889141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:44.801908 containerd[1713]: time="2026-03-13T12:22:44.800998541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:44.801908 containerd[1713]: time="2026-03-13T12:22:44.801268181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:44.801908 containerd[1713]: time="2026-03-13T12:22:44.801511422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:44.841983 systemd[1]: Started cri-containerd-03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1.scope - libcontainer container 03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1. Mar 13 12:22:44.850148 containerd[1713]: time="2026-03-13T12:22:44.849412471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-xl56r,Uid:e647b091-316e-49d6-91db-ea686f6b4ba4,Namespace:calico-system,Attempt:1,} returns sandbox id \"52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f\"" Mar 13 12:22:44.849707 systemd[1]: Started cri-containerd-9d30dc1ccd9cecb5f5d1c3da059cfb8aa06adcaf09249247ba74273b51aa240c.scope - libcontainer container 9d30dc1ccd9cecb5f5d1c3da059cfb8aa06adcaf09249247ba74273b51aa240c. Mar 13 12:22:44.902820 containerd[1713]: time="2026-03-13T12:22:44.901788807Z" level=info msg="StartContainer for \"9d30dc1ccd9cecb5f5d1c3da059cfb8aa06adcaf09249247ba74273b51aa240c\" returns successfully" Mar 13 12:22:44.913552 containerd[1713]: time="2026-03-13T12:22:44.913501709Z" level=info msg="StopPodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\"" Mar 13 12:22:44.967999 containerd[1713]: time="2026-03-13T12:22:44.967276809Z" level=info msg="StopPodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\"" Mar 13 12:22:44.996265 containerd[1713]: time="2026-03-13T12:22:44.995956782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b8d6f4b5-8vb87,Uid:b22d9553-9cf2-4f77-9f81-2807136a31dd,Namespace:calico-system,Attempt:1,} returns sandbox id \"03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1\"" Mar 13 12:22:44.996265 containerd[1713]: time="2026-03-13T12:22:44.996466743Z" level=info msg="StopPodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\"" Mar 13 12:22:45.024740 containerd[1713]: 
time="2026-03-13T12:22:45.024552835Z" level=info msg="StopPodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\"" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.029 [WARNING][5289] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5012c233-3465-4ca7-bdc4-e2ef60a160c5", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203", Pod:"calico-apiserver-6d8b56c7bd-4ctwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4d42079970c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 
12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.029 [INFO][5289] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.029 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" iface="eth0" netns="" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.029 [INFO][5289] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.029 [INFO][5289] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.093 [INFO][5333] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.093 [INFO][5333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.093 [INFO][5333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.129 [WARNING][5333] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.130 [INFO][5333] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.137 [INFO][5333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.156916 containerd[1713]: 2026-03-13 12:22:45.147 [INFO][5289] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.156916 containerd[1713]: time="2026-03-13T12:22:45.156727453Z" level=info msg="TearDown network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" successfully" Mar 13 12:22:45.156916 containerd[1713]: time="2026-03-13T12:22:45.156761053Z" level=info msg="StopPodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" returns successfully" Mar 13 12:22:45.161809 containerd[1713]: time="2026-03-13T12:22:45.161764261Z" level=info msg="RemovePodSandbox for \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\"" Mar 13 12:22:45.198727 containerd[1713]: time="2026-03-13T12:22:45.197890600Z" level=info msg="Forcibly stopping sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\"" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.184 [INFO][5323] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Mar 13 
12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.188 [INFO][5323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" iface="eth0" netns="/var/run/netns/cni-e79569b9-e931-a3ef-92f1-d0899d736570" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.189 [INFO][5323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" iface="eth0" netns="/var/run/netns/cni-e79569b9-e931-a3ef-92f1-d0899d736570" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.193 [INFO][5323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" iface="eth0" netns="/var/run/netns/cni-e79569b9-e931-a3ef-92f1-d0899d736570" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.193 [INFO][5323] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.193 [INFO][5323] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.267 [INFO][5376] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.267 [INFO][5376] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.267 [INFO][5376] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.291 [WARNING][5376] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.291 [INFO][5376] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.295 [INFO][5376] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.307533 containerd[1713]: 2026-03-13 12:22:45.305 [INFO][5323] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Mar 13 12:22:45.309913 containerd[1713]: time="2026-03-13T12:22:45.309413663Z" level=info msg="TearDown network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" successfully" Mar 13 12:22:45.309913 containerd[1713]: time="2026-03-13T12:22:45.309448223Z" level=info msg="StopPodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" returns successfully" Mar 13 12:22:45.312881 systemd[1]: run-netns-cni\x2de79569b9\x2de931\x2da3ef\x2d92f1\x2dd0899d736570.mount: Deactivated successfully. 
Mar 13 12:22:45.319908 containerd[1713]: time="2026-03-13T12:22:45.319705720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-749vw,Uid:f9601ee0-5635-473a-aecf-cb8b509e8382,Namespace:kube-system,Attempt:1,}" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.166 [INFO][5332] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.172 [INFO][5332] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" iface="eth0" netns="/var/run/netns/cni-f1159643-671c-3402-7004-5b675cff9cc0" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.172 [INFO][5332] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" iface="eth0" netns="/var/run/netns/cni-f1159643-671c-3402-7004-5b675cff9cc0" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.172 [INFO][5332] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" iface="eth0" netns="/var/run/netns/cni-f1159643-671c-3402-7004-5b675cff9cc0" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.172 [INFO][5332] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.172 [INFO][5332] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.269 [INFO][5367] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.269 [INFO][5367] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.296 [INFO][5367] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.325 [WARNING][5367] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.325 [INFO][5367] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.333 [INFO][5367] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.355071 containerd[1713]: 2026-03-13 12:22:45.345 [INFO][5332] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:22:45.355071 containerd[1713]: time="2026-03-13T12:22:45.355056098Z" level=info msg="TearDown network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" successfully" Mar 13 12:22:45.355071 containerd[1713]: time="2026-03-13T12:22:45.355082898Z" level=info msg="StopPodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" returns successfully" Mar 13 12:22:45.360172 systemd[1]: run-netns-cni\x2df1159643\x2d671c\x2d3402\x2d7004\x2d5b675cff9cc0.mount: Deactivated successfully. 
Mar 13 12:22:45.365244 kubelet[3140]: I0313 12:22:45.364293 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mh4sx" podStartSLOduration=54.364278153 podStartE2EDuration="54.364278153s" podCreationTimestamp="2026-03-13 12:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:22:45.320260681 +0000 UTC m=+60.481041161" watchObservedRunningTime="2026-03-13 12:22:45.364278153 +0000 UTC m=+60.525058473" Mar 13 12:22:45.369753 containerd[1713]: time="2026-03-13T12:22:45.369704002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-nczjw,Uid:5f01a07f-bab0-4312-a812-86688ae7f0c8,Namespace:calico-system,Attempt:1,}" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.160 [INFO][5349] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.180 [INFO][5349] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" iface="eth0" netns="/var/run/netns/cni-e6e1b644-6555-34fc-2897-7ee1736b502f" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.181 [INFO][5349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" iface="eth0" netns="/var/run/netns/cni-e6e1b644-6555-34fc-2897-7ee1736b502f" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.183 [INFO][5349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" iface="eth0" netns="/var/run/netns/cni-e6e1b644-6555-34fc-2897-7ee1736b502f" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.183 [INFO][5349] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.183 [INFO][5349] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.272 [INFO][5369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.272 [INFO][5369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.337 [INFO][5369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.373 [WARNING][5369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.373 [INFO][5369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.394 [INFO][5369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.412670 containerd[1713]: 2026-03-13 12:22:45.404 [INFO][5349] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:22:45.415239 containerd[1713]: time="2026-03-13T12:22:45.414752035Z" level=info msg="TearDown network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" successfully" Mar 13 12:22:45.415239 containerd[1713]: time="2026-03-13T12:22:45.414786476Z" level=info msg="StopPodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" returns successfully" Mar 13 12:22:45.423341 containerd[1713]: time="2026-03-13T12:22:45.423165649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4rnvb,Uid:b9f33bdd-2737-45fd-8259-eb04da313d49,Namespace:calico-system,Attempt:1,}" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.339 [WARNING][5389] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5012c233-3465-4ca7-bdc4-e2ef60a160c5", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203", Pod:"calico-apiserver-6d8b56c7bd-4ctwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4d42079970c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.339 [INFO][5389] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.339 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" iface="eth0" netns="" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.339 [INFO][5389] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.339 [INFO][5389] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.420 [INFO][5406] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.420 [INFO][5406] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.420 [INFO][5406] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.438 [WARNING][5406] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.438 [INFO][5406] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" HandleID="k8s-pod-network.df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--4ctwx-eth0" Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.443 [INFO][5406] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.455686 containerd[1713]: 2026-03-13 12:22:45.448 [INFO][5389] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91" Mar 13 12:22:45.456140 containerd[1713]: time="2026-03-13T12:22:45.455693983Z" level=info msg="TearDown network for sandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" successfully" Mar 13 12:22:45.505383 containerd[1713]: time="2026-03-13T12:22:45.504873263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:22:45.505383 containerd[1713]: time="2026-03-13T12:22:45.504953223Z" level=info msg="RemovePodSandbox \"df41784033d6b0cc0bf16fbaa42370ea65ab8819aa2cf3069066657efe484f91\" returns successfully" Mar 13 12:22:45.506076 containerd[1713]: time="2026-03-13T12:22:45.505794825Z" level=info msg="StopPodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\"" Mar 13 12:22:45.746557 systemd-networkd[1359]: caliad1552ac588: Link UP Mar 13 12:22:45.746769 systemd-networkd[1359]: caliad1552ac588: Gained carrier Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.500 [INFO][5411] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0 coredns-7d764666f9- kube-system f9601ee0-5635-473a-aecf-cb8b509e8382 999 0 2026-03-13 12:21:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 coredns-7d764666f9-749vw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad1552ac588 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.501 [INFO][5411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.617 [INFO][5456] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" HandleID="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.649 [INFO][5456] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" HandleID="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000395f20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"coredns-7d764666f9-749vw", "timestamp":"2026-03-13 12:22:45.617506208 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002cedc0)} Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.649 [INFO][5456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.649 [INFO][5456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.649 [INFO][5456] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.653 [INFO][5456] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.663 [INFO][5456] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.693 [INFO][5456] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.701 [INFO][5456] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.704 [INFO][5456] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.704 [INFO][5456] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.708 [INFO][5456] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3 Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.719 [INFO][5456] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.739 [INFO][5456] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.198/26] block=192.168.89.192/26 handle="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.739 [INFO][5456] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.198/26] handle="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.739 [INFO][5456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.781082 containerd[1713]: 2026-03-13 12:22:45.739 [INFO][5456] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.198/26] IPv6=[] ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" HandleID="k8s-pod-network.84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.782985 containerd[1713]: 2026-03-13 12:22:45.743 [INFO][5411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f9601ee0-5635-473a-aecf-cb8b509e8382", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"coredns-7d764666f9-749vw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad1552ac588", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:45.782985 containerd[1713]: 2026-03-13 12:22:45.743 [INFO][5411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.198/32] ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.782985 containerd[1713]: 2026-03-13 12:22:45.743 [INFO][5411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad1552ac588 
ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.782985 containerd[1713]: 2026-03-13 12:22:45.746 [INFO][5411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.782985 containerd[1713]: 2026-03-13 12:22:45.750 [INFO][5411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f9601ee0-5635-473a-aecf-cb8b509e8382", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3", 
Pod:"coredns-7d764666f9-749vw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad1552ac588", MAC:"96:ae:e5:cf:50:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:45.783173 containerd[1713]: 2026-03-13 12:22:45.774 [INFO][5411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3" Namespace="kube-system" Pod="coredns-7d764666f9-749vw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0" Mar 13 12:22:45.846202 systemd-networkd[1359]: califa0b2c41c77: Link UP Mar 13 12:22:45.846352 systemd-networkd[1359]: califa0b2c41c77: Gained carrier Mar 13 12:22:45.859855 systemd-networkd[1359]: cali8bce95fb3c0: Gained IPv6LL Mar 13 12:22:45.860111 systemd-networkd[1359]: calicd27d42fe8d: Gained IPv6LL Mar 13 12:22:45.871461 containerd[1713]: time="2026-03-13T12:22:45.870714263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:45.871461 containerd[1713]: time="2026-03-13T12:22:45.870792143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:45.871461 containerd[1713]: time="2026-03-13T12:22:45.870807183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:45.871461 containerd[1713]: time="2026-03-13T12:22:45.870905903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.599 [INFO][5435] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0 csi-node-driver- calico-system b9f33bdd-2737-45fd-8259-eb04da313d49 997 0 2026-03-13 12:22:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 csi-node-driver-4rnvb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califa0b2c41c77 [] [] }} ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.600 [INFO][5435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" 
WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.666 [INFO][5476] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" HandleID="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.700 [INFO][5476] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" HandleID="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"csi-node-driver-4rnvb", "timestamp":"2026-03-13 12:22:45.666889009 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.700 [INFO][5476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.739 [INFO][5476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.739 [INFO][5476] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.754 [INFO][5476] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.775 [INFO][5476] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.797 [INFO][5476] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.800 [INFO][5476] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.805 [INFO][5476] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.806 [INFO][5476] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.813 [INFO][5476] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692 Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.821 [INFO][5476] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.836 [INFO][5476] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.199/26] block=192.168.89.192/26 handle="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.836 [INFO][5476] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.199/26] handle="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.836 [INFO][5476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:45.878295 containerd[1713]: 2026-03-13 12:22:45.836 [INFO][5476] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.199/26] IPv6=[] ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" HandleID="k8s-pod-network.7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.879594 containerd[1713]: 2026-03-13 12:22:45.840 [INFO][5435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9f33bdd-2737-45fd-8259-eb04da313d49", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"csi-node-driver-4rnvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa0b2c41c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:45.879594 containerd[1713]: 2026-03-13 12:22:45.841 [INFO][5435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.199/32] ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.879594 containerd[1713]: 2026-03-13 12:22:45.841 [INFO][5435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa0b2c41c77 ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.879594 containerd[1713]: 2026-03-13 12:22:45.845 [INFO][5435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.879594 
containerd[1713]: 2026-03-13 12:22:45.847 [INFO][5435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9f33bdd-2737-45fd-8259-eb04da313d49", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692", Pod:"csi-node-driver-4rnvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa0b2c41c77", MAC:"26:14:0f:43:de:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:45.879594 containerd[1713]: 
2026-03-13 12:22:45.869 [INFO][5435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692" Namespace="calico-system" Pod="csi-node-driver-4rnvb" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:22:45.915210 systemd[1]: Started cri-containerd-84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3.scope - libcontainer container 84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3. Mar 13 12:22:45.957527 containerd[1713]: time="2026-03-13T12:22:45.957064564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:45.957527 containerd[1713]: time="2026-03-13T12:22:45.957153444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:45.957527 containerd[1713]: time="2026-03-13T12:22:45.957169204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:45.957527 containerd[1713]: time="2026-03-13T12:22:45.957261604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:45.979046 systemd-networkd[1359]: cali6536bd528fc: Link UP Mar 13 12:22:45.988338 systemd-networkd[1359]: cali6536bd528fc: Gained carrier Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.647 [WARNING][5458] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f", Pod:"coredns-7d764666f9-mh4sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd27d42fe8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.648 [INFO][5458] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.648 [INFO][5458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" iface="eth0" netns="" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.648 [INFO][5458] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.648 [INFO][5458] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.711 [INFO][5487] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.712 [INFO][5487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.949 [INFO][5487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.974 [WARNING][5487] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.974 [INFO][5487] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:45.988 [INFO][5487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.011709 containerd[1713]: 2026-03-13 12:22:46.001 [INFO][5458] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.011709 containerd[1713]: time="2026-03-13T12:22:46.010068451Z" level=info msg="TearDown network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" successfully" Mar 13 12:22:46.011709 containerd[1713]: time="2026-03-13T12:22:46.010093731Z" level=info msg="StopPodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" returns successfully" Mar 13 12:22:46.016570 systemd[1]: Started cri-containerd-7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692.scope - libcontainer container 7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692. 
Mar 13 12:22:46.024590 containerd[1713]: time="2026-03-13T12:22:46.024383874Z" level=info msg="RemovePodSandbox for \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\"" Mar 13 12:22:46.025700 containerd[1713]: time="2026-03-13T12:22:46.024839115Z" level=info msg="Forcibly stopping sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\"" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.542 [INFO][5424] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0 calico-apiserver-6d8b56c7bd- calico-system 5f01a07f-bab0-4312-a812-86688ae7f0c8 998 0 2026-03-13 12:22:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d8b56c7bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.101-d13a81acd8 calico-apiserver-6d8b56c7bd-nczjw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6536bd528fc [] [] }} ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.544 [INFO][5424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.669 [INFO][5466] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" HandleID="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.709 [INFO][5466] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" HandleID="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003809c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.101-d13a81acd8", "pod":"calico-apiserver-6d8b56c7bd-nczjw", "timestamp":"2026-03-13 12:22:45.669586693 +0000 UTC"}, Hostname:"ci-4081.3.101-d13a81acd8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000280420)} Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.710 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.836 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.837 [INFO][5466] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.101-d13a81acd8' Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.865 [INFO][5466] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.878 [INFO][5466] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.897 [INFO][5466] ipam/ipam.go 526: Trying affinity for 192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.909 [INFO][5466] ipam/ipam.go 160: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.916 [INFO][5466] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.916 [INFO][5466] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.922 [INFO][5466] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063 Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.933 [INFO][5466] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.949 [INFO][5466] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.89.200/26] block=192.168.89.192/26 handle="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.949 [INFO][5466] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.89.200/26] handle="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" host="ci-4081.3.101-d13a81acd8" Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.949 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.032184 containerd[1713]: 2026-03-13 12:22:45.949 [INFO][5466] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.89.200/26] IPv6=[] ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" HandleID="k8s-pod-network.0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.032991 containerd[1713]: 2026-03-13 12:22:45.960 [INFO][5424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5f01a07f-bab0-4312-a812-86688ae7f0c8", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"", Pod:"calico-apiserver-6d8b56c7bd-nczjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6536bd528fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:46.032991 containerd[1713]: 2026-03-13 12:22:45.961 [INFO][5424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.200/32] ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.032991 containerd[1713]: 2026-03-13 12:22:45.961 [INFO][5424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6536bd528fc ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.032991 containerd[1713]: 2026-03-13 12:22:45.991 [INFO][5424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" 
WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.032991 containerd[1713]: 2026-03-13 12:22:45.992 [INFO][5424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5f01a07f-bab0-4312-a812-86688ae7f0c8", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063", Pod:"calico-apiserver-6d8b56c7bd-nczjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6536bd528fc", MAC:"ee:ff:43:80:d7:cb", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:46.032991 containerd[1713]: 2026-03-13 12:22:46.023 [INFO][5424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063" Namespace="calico-system" Pod="calico-apiserver-6d8b56c7bd-nczjw" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:22:46.068661 containerd[1713]: time="2026-03-13T12:22:46.067438905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-749vw,Uid:f9601ee0-5635-473a-aecf-cb8b509e8382,Namespace:kube-system,Attempt:1,} returns sandbox id \"84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3\"" Mar 13 12:22:46.083012 containerd[1713]: time="2026-03-13T12:22:46.082967890Z" level=info msg="CreateContainer within sandbox \"84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 12:22:46.110960 containerd[1713]: time="2026-03-13T12:22:46.110329495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 13 12:22:46.110960 containerd[1713]: time="2026-03-13T12:22:46.110384855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 13 12:22:46.110960 containerd[1713]: time="2026-03-13T12:22:46.110395615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:46.110960 containerd[1713]: time="2026-03-13T12:22:46.110469295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 13 12:22:46.114095 containerd[1713]: time="2026-03-13T12:22:46.113232300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4rnvb,Uid:b9f33bdd-2737-45fd-8259-eb04da313d49,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692\"" Mar 13 12:22:46.140118 containerd[1713]: time="2026-03-13T12:22:46.140060744Z" level=info msg="CreateContainer within sandbox \"84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c99f841e5dcd7c46aa89c826f4ea1fcebb411745c4fe7761a4a11fa39da65061\"" Mar 13 12:22:46.143502 containerd[1713]: time="2026-03-13T12:22:46.143408669Z" level=info msg="StartContainer for \"c99f841e5dcd7c46aa89c826f4ea1fcebb411745c4fe7761a4a11fa39da65061\"" Mar 13 12:22:46.173420 systemd[1]: Started cri-containerd-0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063.scope - libcontainer container 0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063. Mar 13 12:22:46.191664 systemd[1]: run-netns-cni\x2de6e1b644\x2d6555\x2d34fc\x2d2897\x2d7ee1736b502f.mount: Deactivated successfully. Mar 13 12:22:46.225048 systemd[1]: Started cri-containerd-c99f841e5dcd7c46aa89c826f4ea1fcebb411745c4fe7761a4a11fa39da65061.scope - libcontainer container c99f841e5dcd7c46aa89c826f4ea1fcebb411745c4fe7761a4a11fa39da65061. 
Mar 13 12:22:46.289135 containerd[1713]: time="2026-03-13T12:22:46.288810108Z" level=info msg="StartContainer for \"c99f841e5dcd7c46aa89c826f4ea1fcebb411745c4fe7761a4a11fa39da65061\" returns successfully" Mar 13 12:22:46.319007 containerd[1713]: time="2026-03-13T12:22:46.318962757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d8b56c7bd-nczjw,Uid:5f01a07f-bab0-4312-a812-86688ae7f0c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063\"" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.236 [WARNING][5620] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"2dd1ee1d-a2bd-4d4b-a04a-c030316e7f2c", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"4c807803c204f32a9bb8d3b41e27a3f34662cae001cab6a6c7136bd7f222739f", Pod:"coredns-7d764666f9-mh4sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd27d42fe8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.237 [INFO][5620] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.239 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" iface="eth0" netns="" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.239 [INFO][5620] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.239 [INFO][5620] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.347 [INFO][5704] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.349 [INFO][5704] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.349 [INFO][5704] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.372 [WARNING][5704] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.372 [INFO][5704] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" HandleID="k8s-pod-network.5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--mh4sx-eth0" Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.378 [INFO][5704] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.389471 containerd[1713]: 2026-03-13 12:22:46.386 [INFO][5620] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6" Mar 13 12:22:46.389912 containerd[1713]: time="2026-03-13T12:22:46.389530673Z" level=info msg="TearDown network for sandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" successfully" Mar 13 12:22:46.401627 containerd[1713]: time="2026-03-13T12:22:46.401353732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:22:46.401627 containerd[1713]: time="2026-03-13T12:22:46.401584532Z" level=info msg="RemovePodSandbox \"5e63b527183fdd153695bc580e4966f3407665d72a2e8c8c4ea1e95ffcbcbfd6\" returns successfully" Mar 13 12:22:46.402801 containerd[1713]: time="2026-03-13T12:22:46.402729134Z" level=info msg="StopPodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\"" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.475 [WARNING][5742] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.475 [INFO][5742] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.475 [INFO][5742] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" iface="eth0" netns="" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.475 [INFO][5742] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.475 [INFO][5742] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.505 [INFO][5751] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.505 [INFO][5751] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.505 [INFO][5751] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.517 [WARNING][5751] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.517 [INFO][5751] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.520 [INFO][5751] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.524628 containerd[1713]: 2026-03-13 12:22:46.522 [INFO][5742] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.524977 containerd[1713]: time="2026-03-13T12:22:46.524669974Z" level=info msg="TearDown network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" successfully" Mar 13 12:22:46.524977 containerd[1713]: time="2026-03-13T12:22:46.524696414Z" level=info msg="StopPodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" returns successfully" Mar 13 12:22:46.525152 containerd[1713]: time="2026-03-13T12:22:46.525120215Z" level=info msg="RemovePodSandbox for \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\"" Mar 13 12:22:46.525191 containerd[1713]: time="2026-03-13T12:22:46.525154815Z" level=info msg="Forcibly stopping sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\"" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.577 [WARNING][5766] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" WorkloadEndpoint="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.580 [INFO][5766] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.580 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" iface="eth0" netns="" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.580 [INFO][5766] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.581 [INFO][5766] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.614 [INFO][5773] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.614 [INFO][5773] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.614 [INFO][5773] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.626 [WARNING][5773] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.626 [INFO][5773] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" HandleID="k8s-pod-network.73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Workload="ci--4081.3.101--d13a81acd8-k8s-whisker--859785876c--5rs9x-eth0" Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.628 [INFO][5773] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.632709 containerd[1713]: 2026-03-13 12:22:46.630 [INFO][5766] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e" Mar 13 12:22:46.633486 containerd[1713]: time="2026-03-13T12:22:46.633255992Z" level=info msg="TearDown network for sandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" successfully" Mar 13 12:22:46.643668 containerd[1713]: time="2026-03-13T12:22:46.643621209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:22:46.643877 containerd[1713]: time="2026-03-13T12:22:46.643694969Z" level=info msg="RemovePodSandbox \"73903aa5306e857eb96b09f76803874454be79efabfa5454a606d8bdecf2910e\" returns successfully" Mar 13 12:22:46.644279 containerd[1713]: time="2026-03-13T12:22:46.644254090Z" level=info msg="StopPodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\"" Mar 13 12:22:46.691659 systemd-networkd[1359]: cali249b6f6d1cb: Gained IPv6LL Mar 13 12:22:46.704666 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.110.240.172 Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.711 [WARNING][5788] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"e647b091-316e-49d6-91db-ea686f6b4ba4", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f", Pod:"goldmane-9f7667bb8-xl56r", Endpoint:"eth0", 
ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8bce95fb3c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.712 [INFO][5788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.712 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" iface="eth0" netns="" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.712 [INFO][5788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.712 [INFO][5788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.767 [INFO][5797] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.768 [INFO][5797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.768 [INFO][5797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.781 [WARNING][5797] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.781 [INFO][5797] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.784 [INFO][5797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.791828 containerd[1713]: 2026-03-13 12:22:46.787 [INFO][5788] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.791828 containerd[1713]: time="2026-03-13T12:22:46.791716771Z" level=info msg="TearDown network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" successfully" Mar 13 12:22:46.791828 containerd[1713]: time="2026-03-13T12:22:46.791740731Z" level=info msg="StopPodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" returns successfully" Mar 13 12:22:46.793072 containerd[1713]: time="2026-03-13T12:22:46.792888493Z" level=info msg="RemovePodSandbox for \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\"" Mar 13 12:22:46.793072 containerd[1713]: time="2026-03-13T12:22:46.792982854Z" level=info msg="Forcibly stopping sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\"" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.845 [WARNING][5814] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"e647b091-316e-49d6-91db-ea686f6b4ba4", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f", Pod:"goldmane-9f7667bb8-xl56r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8bce95fb3c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.845 [INFO][5814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.845 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" iface="eth0" netns="" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.845 [INFO][5814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.845 [INFO][5814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.866 [INFO][5821] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.867 [INFO][5821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.867 [INFO][5821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.877 [WARNING][5821] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.877 [INFO][5821] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" HandleID="k8s-pod-network.69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Workload="ci--4081.3.101--d13a81acd8-k8s-goldmane--9f7667bb8--xl56r-eth0" Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.878 [INFO][5821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:22:46.882296 containerd[1713]: 2026-03-13 12:22:46.880 [INFO][5814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038" Mar 13 12:22:46.882839 containerd[1713]: time="2026-03-13T12:22:46.882333560Z" level=info msg="TearDown network for sandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" successfully" Mar 13 12:22:46.893703 containerd[1713]: time="2026-03-13T12:22:46.893644258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:46.897157 containerd[1713]: time="2026-03-13T12:22:46.897100184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:22:46.897275 containerd[1713]: time="2026-03-13T12:22:46.897192944Z" level=info msg="RemovePodSandbox \"69e53e54d21fc6b78aa231fc2ee19a030380bb59df0ab1dd27e52eee7d1b5038\" returns successfully" Mar 13 12:22:46.900683 containerd[1713]: time="2026-03-13T12:22:46.900437230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 13 12:22:46.905083 containerd[1713]: time="2026-03-13T12:22:46.904747757Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:46.910199 containerd[1713]: time="2026-03-13T12:22:46.910161366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:46.911070 containerd[1713]: time="2026-03-13T12:22:46.910990647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 4.320017917s" Mar 13 12:22:46.911177 containerd[1713]: time="2026-03-13T12:22:46.911159607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 13 12:22:46.912718 containerd[1713]: time="2026-03-13T12:22:46.912686970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 12:22:46.923221 containerd[1713]: time="2026-03-13T12:22:46.923179787Z" level=info msg="CreateContainer within sandbox \"b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203\" for 
container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 12:22:46.957497 containerd[1713]: time="2026-03-13T12:22:46.957422363Z" level=info msg="CreateContainer within sandbox \"b089792cf8a2c6066fb371ee74cce6036f2792b8c4d944de5186c95a8a534203\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cfa4b5aaa499c254ce2781dbb6b56ead4d9b8103fcd8802cebdc9d2a687f5cf9\"" Mar 13 12:22:46.959823 containerd[1713]: time="2026-03-13T12:22:46.958244924Z" level=info msg="StartContainer for \"cfa4b5aaa499c254ce2781dbb6b56ead4d9b8103fcd8802cebdc9d2a687f5cf9\"" Mar 13 12:22:46.987722 systemd[1]: Started cri-containerd-cfa4b5aaa499c254ce2781dbb6b56ead4d9b8103fcd8802cebdc9d2a687f5cf9.scope - libcontainer container cfa4b5aaa499c254ce2781dbb6b56ead4d9b8103fcd8802cebdc9d2a687f5cf9. Mar 13 12:22:47.024233 containerd[1713]: time="2026-03-13T12:22:47.024178392Z" level=info msg="StartContainer for \"cfa4b5aaa499c254ce2781dbb6b56ead4d9b8103fcd8802cebdc9d2a687f5cf9\" returns successfully" Mar 13 12:22:47.342963 kubelet[3140]: I0313 12:22:47.342475 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d8b56c7bd-4ctwx" podStartSLOduration=37.011055056 podStartE2EDuration="41.342460834s" podCreationTimestamp="2026-03-13 12:22:06 +0000 UTC" firstStartedPulling="2026-03-13 12:22:42.580843191 +0000 UTC m=+57.741623511" lastFinishedPulling="2026-03-13 12:22:46.912248969 +0000 UTC m=+62.073029289" observedRunningTime="2026-03-13 12:22:47.342377554 +0000 UTC m=+62.503157874" watchObservedRunningTime="2026-03-13 12:22:47.342460834 +0000 UTC m=+62.503241154" Mar 13 12:22:47.587768 systemd-networkd[1359]: caliad1552ac588: Gained IPv6LL Mar 13 12:22:47.652013 systemd-networkd[1359]: cali6536bd528fc: Gained IPv6LL Mar 13 12:22:47.779724 systemd-networkd[1359]: califa0b2c41c77: Gained IPv6LL Mar 13 12:22:48.324421 kubelet[3140]: I0313 12:22:48.324387 3140 prober_manager.go:356] "Failed to trigger a manual run" 
probe="Readiness" Mar 13 12:22:49.352456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227356857.mount: Deactivated successfully. Mar 13 12:22:49.733279 containerd[1713]: time="2026-03-13T12:22:49.733150191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:49.736551 containerd[1713]: time="2026-03-13T12:22:49.736361836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 13 12:22:49.741578 containerd[1713]: time="2026-03-13T12:22:49.740437883Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:49.745472 containerd[1713]: time="2026-03-13T12:22:49.745434371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:49.746136 containerd[1713]: time="2026-03-13T12:22:49.746103612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.833382882s" Mar 13 12:22:49.746192 containerd[1713]: time="2026-03-13T12:22:49.746139412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 13 12:22:49.747742 containerd[1713]: time="2026-03-13T12:22:49.747705014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 13 12:22:49.756017 
containerd[1713]: time="2026-03-13T12:22:49.755980028Z" level=info msg="CreateContainer within sandbox \"52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 12:22:49.799876 containerd[1713]: time="2026-03-13T12:22:49.799821740Z" level=info msg="CreateContainer within sandbox \"52b6033e4e9b3bcd4271cb4a330cd77a8cff9de56bc3eb5e3825a3c4c945829f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"52c22a4463dd916b5d7005c98eb480c667ca86a53df27051cb575a54586c34dc\"" Mar 13 12:22:49.800804 containerd[1713]: time="2026-03-13T12:22:49.800777541Z" level=info msg="StartContainer for \"52c22a4463dd916b5d7005c98eb480c667ca86a53df27051cb575a54586c34dc\"" Mar 13 12:22:49.861009 systemd[1]: Started cri-containerd-52c22a4463dd916b5d7005c98eb480c667ca86a53df27051cb575a54586c34dc.scope - libcontainer container 52c22a4463dd916b5d7005c98eb480c667ca86a53df27051cb575a54586c34dc. Mar 13 12:22:49.899781 containerd[1713]: time="2026-03-13T12:22:49.899734504Z" level=info msg="StartContainer for \"52c22a4463dd916b5d7005c98eb480c667ca86a53df27051cb575a54586c34dc\" returns successfully" Mar 13 12:22:50.354406 kubelet[3140]: I0313 12:22:50.354324 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-xl56r" podStartSLOduration=39.459935791 podStartE2EDuration="44.354310488s" podCreationTimestamp="2026-03-13 12:22:06 +0000 UTC" firstStartedPulling="2026-03-13 12:22:44.853024037 +0000 UTC m=+60.013804357" lastFinishedPulling="2026-03-13 12:22:49.747398734 +0000 UTC m=+64.908179054" observedRunningTime="2026-03-13 12:22:50.352597325 +0000 UTC m=+65.513377645" watchObservedRunningTime="2026-03-13 12:22:50.354310488 +0000 UTC m=+65.515090808" Mar 13 12:22:50.354959 kubelet[3140]: I0313 12:22:50.354791 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-749vw" 
podStartSLOduration=59.354784089 podStartE2EDuration="59.354784089s" podCreationTimestamp="2026-03-13 12:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:22:47.37093632 +0000 UTC m=+62.531716640" watchObservedRunningTime="2026-03-13 12:22:50.354784089 +0000 UTC m=+65.515564409" Mar 13 12:22:51.488162 kubelet[3140]: I0313 12:22:51.487892 3140 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:22:52.356293 systemd[1]: run-containerd-runc-k8s.io-52c22a4463dd916b5d7005c98eb480c667ca86a53df27051cb575a54586c34dc-runc.73pDzV.mount: Deactivated successfully. Mar 13 12:22:54.378353 containerd[1713]: time="2026-03-13T12:22:54.378304842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:54.382375 containerd[1713]: time="2026-03-13T12:22:54.382328610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 13 12:22:54.385771 containerd[1713]: time="2026-03-13T12:22:54.385710336Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:54.393090 containerd[1713]: time="2026-03-13T12:22:54.392772630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:54.393749 containerd[1713]: time="2026-03-13T12:22:54.393715552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo 
digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.645973818s" Mar 13 12:22:54.393810 containerd[1713]: time="2026-03-13T12:22:54.393752232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 13 12:22:54.395814 containerd[1713]: time="2026-03-13T12:22:54.395778276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 12:22:54.415408 containerd[1713]: time="2026-03-13T12:22:54.415364434Z" level=info msg="CreateContainer within sandbox \"03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 13 12:22:54.459781 containerd[1713]: time="2026-03-13T12:22:54.459731961Z" level=info msg="CreateContainer within sandbox \"03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"266cc3e979d1258978787c6f8bc1dbca22977cca56c515d14b6c9a3cf14f0127\"" Mar 13 12:22:54.461534 containerd[1713]: time="2026-03-13T12:22:54.461191404Z" level=info msg="StartContainer for \"266cc3e979d1258978787c6f8bc1dbca22977cca56c515d14b6c9a3cf14f0127\"" Mar 13 12:22:54.507689 systemd[1]: Started cri-containerd-266cc3e979d1258978787c6f8bc1dbca22977cca56c515d14b6c9a3cf14f0127.scope - libcontainer container 266cc3e979d1258978787c6f8bc1dbca22977cca56c515d14b6c9a3cf14f0127. 
Mar 13 12:22:54.552091 containerd[1713]: time="2026-03-13T12:22:54.551359661Z" level=info msg="StartContainer for \"266cc3e979d1258978787c6f8bc1dbca22977cca56c515d14b6c9a3cf14f0127\" returns successfully" Mar 13 12:22:55.190514 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.110.240.172 Mar 13 12:22:55.418519 kubelet[3140]: I0313 12:22:55.418258 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77b8d6f4b5-8vb87" podStartSLOduration=38.034506377 podStartE2EDuration="47.418242521s" podCreationTimestamp="2026-03-13 12:22:08 +0000 UTC" firstStartedPulling="2026-03-13 12:22:45.01099661 +0000 UTC m=+60.171776890" lastFinishedPulling="2026-03-13 12:22:54.394732714 +0000 UTC m=+69.555513034" observedRunningTime="2026-03-13 12:22:55.365752098 +0000 UTC m=+70.526532418" watchObservedRunningTime="2026-03-13 12:22:55.418242521 +0000 UTC m=+70.579022841" Mar 13 12:22:56.140515 containerd[1713]: time="2026-03-13T12:22:56.140420017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:56.144978 containerd[1713]: time="2026-03-13T12:22:56.144934266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 13 12:22:56.148987 containerd[1713]: time="2026-03-13T12:22:56.148949474Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:56.154302 containerd[1713]: time="2026-03-13T12:22:56.154257884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:56.155059 containerd[1713]: time="2026-03-13T12:22:56.155034286Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.75922145s" Mar 13 12:22:56.155100 containerd[1713]: time="2026-03-13T12:22:56.155066326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 13 12:22:56.159428 containerd[1713]: time="2026-03-13T12:22:56.158524332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 12:22:56.166558 containerd[1713]: time="2026-03-13T12:22:56.166513788Z" level=info msg="CreateContainer within sandbox \"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 12:22:56.203206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246256367.mount: Deactivated successfully. Mar 13 12:22:56.219082 containerd[1713]: time="2026-03-13T12:22:56.219025331Z" level=info msg="CreateContainer within sandbox \"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e296eacfdbcf763b9eb3a9762821bc4c030828148f9fff83d60d325c6293ad67\"" Mar 13 12:22:56.220613 containerd[1713]: time="2026-03-13T12:22:56.219902053Z" level=info msg="StartContainer for \"e296eacfdbcf763b9eb3a9762821bc4c030828148f9fff83d60d325c6293ad67\"" Mar 13 12:22:56.256682 systemd[1]: Started cri-containerd-e296eacfdbcf763b9eb3a9762821bc4c030828148f9fff83d60d325c6293ad67.scope - libcontainer container e296eacfdbcf763b9eb3a9762821bc4c030828148f9fff83d60d325c6293ad67. 
Mar 13 12:22:56.290079 containerd[1713]: time="2026-03-13T12:22:56.290036230Z" level=info msg="StartContainer for \"e296eacfdbcf763b9eb3a9762821bc4c030828148f9fff83d60d325c6293ad67\" returns successfully" Mar 13 12:22:56.512646 containerd[1713]: time="2026-03-13T12:22:56.511697225Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:56.515084 containerd[1713]: time="2026-03-13T12:22:56.515028151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 12:22:56.517566 containerd[1713]: time="2026-03-13T12:22:56.517425276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 357.787221ms" Mar 13 12:22:56.517566 containerd[1713]: time="2026-03-13T12:22:56.517463116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 13 12:22:56.520404 containerd[1713]: time="2026-03-13T12:22:56.518743319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 12:22:56.527203 containerd[1713]: time="2026-03-13T12:22:56.527045455Z" level=info msg="CreateContainer within sandbox \"0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 12:22:56.572727 containerd[1713]: time="2026-03-13T12:22:56.572587264Z" level=info msg="CreateContainer within sandbox \"0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9ca5d330ffae91236a673e146f2e6960e3e1e5d7d24090cff2944d24f63e5d2b\"" Mar 13 12:22:56.573965 containerd[1713]: time="2026-03-13T12:22:56.573834387Z" level=info msg="StartContainer for \"9ca5d330ffae91236a673e146f2e6960e3e1e5d7d24090cff2944d24f63e5d2b\"" Mar 13 12:22:56.605345 systemd[1]: run-containerd-runc-k8s.io-9ca5d330ffae91236a673e146f2e6960e3e1e5d7d24090cff2944d24f63e5d2b-runc.2AyyFc.mount: Deactivated successfully. Mar 13 12:22:56.616697 systemd[1]: Started cri-containerd-9ca5d330ffae91236a673e146f2e6960e3e1e5d7d24090cff2944d24f63e5d2b.scope - libcontainer container 9ca5d330ffae91236a673e146f2e6960e3e1e5d7d24090cff2944d24f63e5d2b. Mar 13 12:22:56.656331 containerd[1713]: time="2026-03-13T12:22:56.656287828Z" level=info msg="StartContainer for \"9ca5d330ffae91236a673e146f2e6960e3e1e5d7d24090cff2944d24f63e5d2b\" returns successfully" Mar 13 12:22:58.356519 kubelet[3140]: I0313 12:22:58.356466 3140 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:22:58.645583 containerd[1713]: time="2026-03-13T12:22:58.645346769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:58.649982 containerd[1713]: time="2026-03-13T12:22:58.649935178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 13 12:22:58.653687 containerd[1713]: time="2026-03-13T12:22:58.653635225Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:58.659868 containerd[1713]: time="2026-03-13T12:22:58.659811557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 12:22:58.661075 containerd[1713]: time="2026-03-13T12:22:58.660577919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.14180044s" Mar 13 12:22:58.661075 containerd[1713]: time="2026-03-13T12:22:58.660615359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 13 12:22:58.672006 containerd[1713]: time="2026-03-13T12:22:58.671830821Z" level=info msg="CreateContainer within sandbox \"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 12:22:58.710833 containerd[1713]: time="2026-03-13T12:22:58.710785697Z" level=info msg="CreateContainer within sandbox \"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c328a4a623e95f33dbd5492d3be83ca3c1c2506e9218158d1f66291b83f2a2f8\"" Mar 13 12:22:58.711737 containerd[1713]: time="2026-03-13T12:22:58.711708139Z" level=info msg="StartContainer for \"c328a4a623e95f33dbd5492d3be83ca3c1c2506e9218158d1f66291b83f2a2f8\"" Mar 13 12:22:58.744941 systemd[1]: run-containerd-runc-k8s.io-c328a4a623e95f33dbd5492d3be83ca3c1c2506e9218158d1f66291b83f2a2f8-runc.3sf8la.mount: Deactivated successfully. 
Mar 13 12:22:58.752648 systemd[1]: Started cri-containerd-c328a4a623e95f33dbd5492d3be83ca3c1c2506e9218158d1f66291b83f2a2f8.scope - libcontainer container c328a4a623e95f33dbd5492d3be83ca3c1c2506e9218158d1f66291b83f2a2f8. Mar 13 12:22:58.782781 containerd[1713]: time="2026-03-13T12:22:58.782273757Z" level=info msg="StartContainer for \"c328a4a623e95f33dbd5492d3be83ca3c1c2506e9218158d1f66291b83f2a2f8\" returns successfully" Mar 13 12:22:58.979896 kubelet[3140]: I0313 12:22:58.979742 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d8b56c7bd-nczjw" podStartSLOduration=42.78498899 podStartE2EDuration="52.979725704s" podCreationTimestamp="2026-03-13 12:22:06 +0000 UTC" firstStartedPulling="2026-03-13 12:22:46.323449444 +0000 UTC m=+61.484229764" lastFinishedPulling="2026-03-13 12:22:56.518186158 +0000 UTC m=+71.678966478" observedRunningTime="2026-03-13 12:22:57.373902276 +0000 UTC m=+72.534682596" watchObservedRunningTime="2026-03-13 12:22:58.979725704 +0000 UTC m=+74.140506024" Mar 13 12:22:59.083186 kubelet[3140]: I0313 12:22:59.083078 3140 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 12:22:59.083186 kubelet[3140]: I0313 12:22:59.083121 3140 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 12:22:59.379462 kubelet[3140]: I0313 12:22:59.378567 3140 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-4rnvb" podStartSLOduration=38.834122391 podStartE2EDuration="51.378550766s" podCreationTimestamp="2026-03-13 12:22:08 +0000 UTC" firstStartedPulling="2026-03-13 12:22:46.11933955 +0000 UTC m=+61.280119870" lastFinishedPulling="2026-03-13 12:22:58.663767925 +0000 UTC m=+73.824548245" observedRunningTime="2026-03-13 12:22:59.378068406 
+0000 UTC m=+74.538848686" watchObservedRunningTime="2026-03-13 12:22:59.378550766 +0000 UTC m=+74.539331046" Mar 13 12:23:08.843530 kernel: icmp: detected local route for 10.200.20.10 during ICMP sending, src 10.110.240.172 Mar 13 12:23:46.901042 containerd[1713]: time="2026-03-13T12:23:46.900878612Z" level=info msg="StopPodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\"" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.936 [WARNING][6389] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5f01a07f-bab0-4312-a812-86688ae7f0c8", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063", Pod:"calico-apiserver-6d8b56c7bd-nczjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6536bd528fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.937 [INFO][6389] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.937 [INFO][6389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" iface="eth0" netns="" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.937 [INFO][6389] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.937 [INFO][6389] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.960 [INFO][6396] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.961 [INFO][6396] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.961 [INFO][6396] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.970 [WARNING][6396] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.970 [INFO][6396] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.971 [INFO][6396] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:23:46.976181 containerd[1713]: 2026-03-13 12:23:46.974 [INFO][6389] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:46.976634 containerd[1713]: time="2026-03-13T12:23:46.976224514Z" level=info msg="TearDown network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" successfully" Mar 13 12:23:46.976634 containerd[1713]: time="2026-03-13T12:23:46.976250914Z" level=info msg="StopPodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" returns successfully" Mar 13 12:23:46.976911 containerd[1713]: time="2026-03-13T12:23:46.976884355Z" level=info msg="RemovePodSandbox for \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\"" Mar 13 12:23:46.976962 containerd[1713]: time="2026-03-13T12:23:46.976919835Z" level=info msg="Forcibly stopping sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\"" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.023 [WARNING][6410] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0", GenerateName:"calico-apiserver-6d8b56c7bd-", Namespace:"calico-system", SelfLink:"", UID:"5f01a07f-bab0-4312-a812-86688ae7f0c8", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d8b56c7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"0b0e7fb84dd6a7cee7dfffd4d71a2ada70a6ace891159d2b8031f2cf00d3c063", Pod:"calico-apiserver-6d8b56c7bd-nczjw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6536bd528fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.023 [INFO][6410] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.024 [INFO][6410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" iface="eth0" netns="" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.024 [INFO][6410] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.024 [INFO][6410] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.045 [INFO][6417] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.045 [INFO][6417] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.045 [INFO][6417] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.054 [WARNING][6417] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.054 [INFO][6417] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" HandleID="k8s-pod-network.f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--apiserver--6d8b56c7bd--nczjw-eth0" Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.056 [INFO][6417] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:23:47.060124 containerd[1713]: 2026-03-13 12:23:47.058 [INFO][6410] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3" Mar 13 12:23:47.060124 containerd[1713]: time="2026-03-13T12:23:47.060078072Z" level=info msg="TearDown network for sandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" successfully" Mar 13 12:23:47.073313 containerd[1713]: time="2026-03-13T12:23:47.073267376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 13 12:23:47.073804 containerd[1713]: time="2026-03-13T12:23:47.073567177Z" level=info msg="RemovePodSandbox \"f6e1186cd3299f8eddf8bafa326a860933e2152d063557117c367c01016670b3\" returns successfully" Mar 13 12:23:47.074096 containerd[1713]: time="2026-03-13T12:23:47.074067538Z" level=info msg="StopPodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\"" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.110 [WARNING][6431] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9f33bdd-2737-45fd-8259-eb04da313d49", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692", Pod:"csi-node-driver-4rnvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa0b2c41c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.110 [INFO][6431] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.110 [INFO][6431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" iface="eth0" netns="" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.110 [INFO][6431] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.110 [INFO][6431] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.130 [INFO][6438] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.131 [INFO][6438] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.131 [INFO][6438] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.140 [WARNING][6438] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.140 [INFO][6438] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0" Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.141 [INFO][6438] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 12:23:47.145587 containerd[1713]: 2026-03-13 12:23:47.143 [INFO][6431] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Mar 13 12:23:47.146182 containerd[1713]: time="2026-03-13T12:23:47.145565312Z" level=info msg="TearDown network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" successfully" Mar 13 12:23:47.146182 containerd[1713]: time="2026-03-13T12:23:47.145802753Z" level=info msg="StopPodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" returns successfully" Mar 13 12:23:47.147003 containerd[1713]: time="2026-03-13T12:23:47.146636074Z" level=info msg="RemovePodSandbox for \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\"" Mar 13 12:23:47.147003 containerd[1713]: time="2026-03-13T12:23:47.146674315Z" level=info msg="Forcibly stopping sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\"" Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.183 [WARNING][6453] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b9f33bdd-2737-45fd-8259-eb04da313d49", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"7e2fad7880b41043cc1333923ff24c74a5417e7ebc2320a720f7e97c03126692", Pod:"csi-node-driver-4rnvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa0b2c41c77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.183 [INFO][6453] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd"
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.183 [INFO][6453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" iface="eth0" netns=""
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.183 [INFO][6453] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd"
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.183 [INFO][6453] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd"
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.203 [INFO][6460] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0"
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.203 [INFO][6460] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.203 [INFO][6460] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.212 [WARNING][6460] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0"
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.212 [INFO][6460] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" HandleID="k8s-pod-network.13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd" Workload="ci--4081.3.101--d13a81acd8-k8s-csi--node--driver--4rnvb-eth0"
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.214 [INFO][6460] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 12:23:47.219310 containerd[1713]: 2026-03-13 12:23:47.216 [INFO][6453] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd"
Mar 13 12:23:47.219310 containerd[1713]: time="2026-03-13T12:23:47.217899529Z" level=info msg="TearDown network for sandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" successfully"
Mar 13 12:23:47.227385 containerd[1713]: time="2026-03-13T12:23:47.227337906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 13 12:23:47.227646 containerd[1713]: time="2026-03-13T12:23:47.227625067Z" level=info msg="RemovePodSandbox \"13f8d77fc1636328d29a41256f692cc89e85dc3001496b0a0957a2d4a751c6dd\" returns successfully"
Mar 13 12:23:47.228186 containerd[1713]: time="2026-03-13T12:23:47.228163988Z" level=info msg="StopPodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\""
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.263 [WARNING][6474] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0", GenerateName:"calico-kube-controllers-77b8d6f4b5-", Namespace:"calico-system", SelfLink:"", UID:"b22d9553-9cf2-4f77-9f81-2807136a31dd", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8d6f4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1", Pod:"calico-kube-controllers-77b8d6f4b5-8vb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali249b6f6d1cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.263 [INFO][6474] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.263 [INFO][6474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" iface="eth0" netns=""
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.263 [INFO][6474] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.263 [INFO][6474] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.283 [INFO][6481] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0"
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.283 [INFO][6481] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.283 [INFO][6481] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.292 [WARNING][6481] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0"
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.292 [INFO][6481] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0"
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.293 [INFO][6481] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 12:23:47.297784 containerd[1713]: 2026-03-13 12:23:47.295 [INFO][6474] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.298201 containerd[1713]: time="2026-03-13T12:23:47.297842879Z" level=info msg="TearDown network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" successfully"
Mar 13 12:23:47.298201 containerd[1713]: time="2026-03-13T12:23:47.297869399Z" level=info msg="StopPodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" returns successfully"
Mar 13 12:23:47.298581 containerd[1713]: time="2026-03-13T12:23:47.298552520Z" level=info msg="RemovePodSandbox for \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\""
Mar 13 12:23:47.298624 containerd[1713]: time="2026-03-13T12:23:47.298598761Z" level=info msg="Forcibly stopping sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\""
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.333 [WARNING][6495] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0", GenerateName:"calico-kube-controllers-77b8d6f4b5-", Namespace:"calico-system", SelfLink:"", UID:"b22d9553-9cf2-4f77-9f81-2807136a31dd", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 22, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b8d6f4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"03abd627d2ffaf7a235dcca3711c12f111f740fdb5713dbd996a222ccec45da1", Pod:"calico-kube-controllers-77b8d6f4b5-8vb87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali249b6f6d1cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.333 [INFO][6495] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.333 [INFO][6495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" iface="eth0" netns=""
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.333 [INFO][6495] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.333 [INFO][6495] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.357 [INFO][6502] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0"
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.357 [INFO][6502] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.357 [INFO][6502] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.366 [WARNING][6502] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0"
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.366 [INFO][6502] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" HandleID="k8s-pod-network.b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450" Workload="ci--4081.3.101--d13a81acd8-k8s-calico--kube--controllers--77b8d6f4b5--8vb87-eth0"
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.368 [INFO][6502] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 12:23:47.372398 containerd[1713]: 2026-03-13 12:23:47.370 [INFO][6495] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450"
Mar 13 12:23:47.372844 containerd[1713]: time="2026-03-13T12:23:47.372443660Z" level=info msg="TearDown network for sandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" successfully"
Mar 13 12:23:47.381452 containerd[1713]: time="2026-03-13T12:23:47.381404796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 13 12:23:47.381543 containerd[1713]: time="2026-03-13T12:23:47.381499277Z" level=info msg="RemovePodSandbox \"b0f381074166d3c350968fafcbebd3f3789ec3503e7ee5b84ff4a2de30c2c450\" returns successfully"
Mar 13 12:23:47.382225 containerd[1713]: time="2026-03-13T12:23:47.382198718Z" level=info msg="StopPodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\""
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.417 [WARNING][6516] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f9601ee0-5635-473a-aecf-cb8b509e8382", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3", Pod:"coredns-7d764666f9-749vw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad1552ac588", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.418 [INFO][6516] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.418 [INFO][6516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" iface="eth0" netns=""
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.418 [INFO][6516] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.418 [INFO][6516] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.438 [INFO][6523] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0"
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.439 [INFO][6523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.439 [INFO][6523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.448 [WARNING][6523] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0"
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.448 [INFO][6523] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0"
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.450 [INFO][6523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 12:23:47.454004 containerd[1713]: 2026-03-13 12:23:47.452 [INFO][6516] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.454427 containerd[1713]: time="2026-03-13T12:23:47.454059813Z" level=info msg="TearDown network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" successfully"
Mar 13 12:23:47.454427 containerd[1713]: time="2026-03-13T12:23:47.454094333Z" level=info msg="StopPodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" returns successfully"
Mar 13 12:23:47.454857 containerd[1713]: time="2026-03-13T12:23:47.454813895Z" level=info msg="RemovePodSandbox for \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\""
Mar 13 12:23:47.454920 containerd[1713]: time="2026-03-13T12:23:47.454874095Z" level=info msg="Forcibly stopping sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\""
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.493 [WARNING][6537] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"f9601ee0-5635-473a-aecf-cb8b509e8382", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 12, 21, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.101-d13a81acd8", ContainerID:"84edf55055672d83b6fb4b756b7411bee7b57e6a1bea3d91bb955aac01fa2db3", Pod:"coredns-7d764666f9-749vw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad1552ac588", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.493 [INFO][6537] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.494 [INFO][6537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" iface="eth0" netns=""
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.494 [INFO][6537] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.494 [INFO][6537] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.514 [INFO][6544] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0"
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.514 [INFO][6544] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.514 [INFO][6544] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.523 [WARNING][6544] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0"
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.523 [INFO][6544] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" HandleID="k8s-pod-network.d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71" Workload="ci--4081.3.101--d13a81acd8-k8s-coredns--7d764666f9--749vw-eth0"
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.525 [INFO][6544] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 12:23:47.530177 containerd[1713]: 2026-03-13 12:23:47.527 [INFO][6537] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71"
Mar 13 12:23:47.530177 containerd[1713]: time="2026-03-13T12:23:47.528865954Z" level=info msg="TearDown network for sandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" successfully"
Mar 13 12:23:47.539660 containerd[1713]: time="2026-03-13T12:23:47.539575814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 13 12:23:47.539802 containerd[1713]: time="2026-03-13T12:23:47.539724454Z" level=info msg="RemovePodSandbox \"d8e4e7d5e3fdebcdd5e184366f4fa6fd3d6d6e358655d52d4743096d3eec1e71\" returns successfully"
Mar 13 12:23:59.684574 systemd[1]: Started sshd@7-10.200.20.10:22-10.200.16.10:38462.service - OpenSSH per-connection server daemon (10.200.16.10:38462).
Mar 13 12:24:00.174931 sshd[6613]: Accepted publickey for core from 10.200.16.10 port 38462 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:00.177075 sshd[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:00.182354 systemd-logind[1690]: New session 10 of user core. Mar 13 12:24:00.186630 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 13 12:24:00.600346 sshd[6613]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:00.604030 systemd[1]: sshd@7-10.200.20.10:22-10.200.16.10:38462.service: Deactivated successfully. Mar 13 12:24:00.606287 systemd[1]: session-10.scope: Deactivated successfully. Mar 13 12:24:00.607034 systemd-logind[1690]: Session 10 logged out. Waiting for processes to exit. Mar 13 12:24:00.608375 systemd-logind[1690]: Removed session 10. Mar 13 12:24:05.693732 systemd[1]: Started sshd@8-10.200.20.10:22-10.200.16.10:37822.service - OpenSSH per-connection server daemon (10.200.16.10:37822). Mar 13 12:24:06.187207 sshd[6660]: Accepted publickey for core from 10.200.16.10 port 37822 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:06.207022 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:06.213288 systemd-logind[1690]: New session 11 of user core. Mar 13 12:24:06.218731 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 13 12:24:06.631381 sshd[6660]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:06.635300 systemd[1]: sshd@8-10.200.20.10:22-10.200.16.10:37822.service: Deactivated successfully. Mar 13 12:24:06.637284 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 12:24:06.638637 systemd-logind[1690]: Session 11 logged out. Waiting for processes to exit. Mar 13 12:24:06.639896 systemd-logind[1690]: Removed session 11. 
Mar 13 12:24:11.723577 systemd[1]: Started sshd@9-10.200.20.10:22-10.200.16.10:54738.service - OpenSSH per-connection server daemon (10.200.16.10:54738). Mar 13 12:24:12.218623 sshd[6711]: Accepted publickey for core from 10.200.16.10 port 54738 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:12.220066 sshd[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:12.224349 systemd-logind[1690]: New session 12 of user core. Mar 13 12:24:12.227647 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 13 12:24:12.637229 sshd[6711]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:12.640779 systemd[1]: sshd@9-10.200.20.10:22-10.200.16.10:54738.service: Deactivated successfully. Mar 13 12:24:12.642980 systemd[1]: session-12.scope: Deactivated successfully. Mar 13 12:24:12.644138 systemd-logind[1690]: Session 12 logged out. Waiting for processes to exit. Mar 13 12:24:12.645243 systemd-logind[1690]: Removed session 12. Mar 13 12:24:17.740803 systemd[1]: Started sshd@10-10.200.20.10:22-10.200.16.10:54748.service - OpenSSH per-connection server daemon (10.200.16.10:54748). Mar 13 12:24:18.225520 sshd[6725]: Accepted publickey for core from 10.200.16.10 port 54748 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:18.226427 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:18.230186 systemd-logind[1690]: New session 13 of user core. Mar 13 12:24:18.234668 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 13 12:24:18.644856 sshd[6725]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:18.650302 systemd-logind[1690]: Session 13 logged out. Waiting for processes to exit. Mar 13 12:24:18.650650 systemd[1]: sshd@10-10.200.20.10:22-10.200.16.10:54748.service: Deactivated successfully. Mar 13 12:24:18.654807 systemd[1]: session-13.scope: Deactivated successfully. 
Mar 13 12:24:18.656011 systemd-logind[1690]: Removed session 13. Mar 13 12:24:23.743784 systemd[1]: Started sshd@11-10.200.20.10:22-10.200.16.10:45806.service - OpenSSH per-connection server daemon (10.200.16.10:45806). Mar 13 12:24:24.234757 sshd[6761]: Accepted publickey for core from 10.200.16.10 port 45806 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:24.236273 sshd[6761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:24.241098 systemd-logind[1690]: New session 14 of user core. Mar 13 12:24:24.245641 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 13 12:24:24.662638 sshd[6761]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:24.669043 systemd-logind[1690]: Session 14 logged out. Waiting for processes to exit. Mar 13 12:24:24.670000 systemd[1]: sshd@11-10.200.20.10:22-10.200.16.10:45806.service: Deactivated successfully. Mar 13 12:24:24.673333 systemd[1]: session-14.scope: Deactivated successfully. Mar 13 12:24:24.674507 systemd-logind[1690]: Removed session 14. Mar 13 12:24:25.364434 systemd[1]: run-containerd-runc-k8s.io-266cc3e979d1258978787c6f8bc1dbca22977cca56c515d14b6c9a3cf14f0127-runc.beGYhS.mount: Deactivated successfully. Mar 13 12:24:29.752697 systemd[1]: Started sshd@12-10.200.20.10:22-10.200.16.10:45822.service - OpenSSH per-connection server daemon (10.200.16.10:45822). Mar 13 12:24:30.250571 sshd[6814]: Accepted publickey for core from 10.200.16.10 port 45822 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:30.251959 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:30.256193 systemd-logind[1690]: New session 15 of user core. Mar 13 12:24:30.264707 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 13 12:24:30.672247 sshd[6814]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:30.676185 systemd[1]: sshd@12-10.200.20.10:22-10.200.16.10:45822.service: Deactivated successfully. Mar 13 12:24:30.679546 systemd[1]: session-15.scope: Deactivated successfully. Mar 13 12:24:30.682701 systemd-logind[1690]: Session 15 logged out. Waiting for processes to exit. Mar 13 12:24:30.684023 systemd-logind[1690]: Removed session 15. Mar 13 12:24:30.759848 systemd[1]: Started sshd@13-10.200.20.10:22-10.200.16.10:41004.service - OpenSSH per-connection server daemon (10.200.16.10:41004). Mar 13 12:24:31.252306 sshd[6847]: Accepted publickey for core from 10.200.16.10 port 41004 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:31.253236 sshd[6847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:31.258199 systemd-logind[1690]: New session 16 of user core. Mar 13 12:24:31.263702 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 13 12:24:31.718308 sshd[6847]: pam_unix(sshd:session): session closed for user core Mar 13 12:24:31.722708 systemd[1]: sshd@13-10.200.20.10:22-10.200.16.10:41004.service: Deactivated successfully. Mar 13 12:24:31.727064 systemd[1]: session-16.scope: Deactivated successfully. Mar 13 12:24:31.728146 systemd-logind[1690]: Session 16 logged out. Waiting for processes to exit. Mar 13 12:24:31.729978 systemd-logind[1690]: Removed session 16. Mar 13 12:24:31.811101 systemd[1]: Started sshd@14-10.200.20.10:22-10.200.16.10:41016.service - OpenSSH per-connection server daemon (10.200.16.10:41016). Mar 13 12:24:32.313016 sshd[6858]: Accepted publickey for core from 10.200.16.10 port 41016 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA Mar 13 12:24:32.314523 sshd[6858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 12:24:32.319571 systemd-logind[1690]: New session 17 of user core. 
Mar 13 12:24:32.327673 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 12:24:32.730218 sshd[6858]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:32.734584 systemd[1]: sshd@14-10.200.20.10:22-10.200.16.10:41016.service: Deactivated successfully.
Mar 13 12:24:32.738196 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 12:24:32.739273 systemd-logind[1690]: Session 17 logged out. Waiting for processes to exit.
Mar 13 12:24:32.740285 systemd-logind[1690]: Removed session 17.
Mar 13 12:24:37.819044 systemd[1]: Started sshd@15-10.200.20.10:22-10.200.16.10:41028.service - OpenSSH per-connection server daemon (10.200.16.10:41028).
Mar 13 12:24:38.311512 sshd[6892]: Accepted publickey for core from 10.200.16.10 port 41028 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:38.312595 sshd[6892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:38.316209 systemd-logind[1690]: New session 18 of user core.
Mar 13 12:24:38.323638 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 12:24:38.730116 sshd[6892]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:38.733748 systemd[1]: sshd@15-10.200.20.10:22-10.200.16.10:41028.service: Deactivated successfully.
Mar 13 12:24:38.735809 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 12:24:38.737784 systemd-logind[1690]: Session 18 logged out. Waiting for processes to exit.
Mar 13 12:24:38.739237 systemd-logind[1690]: Removed session 18.
Mar 13 12:24:38.823757 systemd[1]: Started sshd@16-10.200.20.10:22-10.200.16.10:41038.service - OpenSSH per-connection server daemon (10.200.16.10:41038).
Mar 13 12:24:39.308506 sshd[6904]: Accepted publickey for core from 10.200.16.10 port 41038 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:39.309384 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:39.313543 systemd-logind[1690]: New session 19 of user core.
Mar 13 12:24:39.318660 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 12:24:39.846264 sshd[6904]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:39.850321 systemd[1]: sshd@16-10.200.20.10:22-10.200.16.10:41038.service: Deactivated successfully.
Mar 13 12:24:39.852612 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 12:24:39.853410 systemd-logind[1690]: Session 19 logged out. Waiting for processes to exit.
Mar 13 12:24:39.854380 systemd-logind[1690]: Removed session 19.
Mar 13 12:24:39.939755 systemd[1]: Started sshd@17-10.200.20.10:22-10.200.16.10:55952.service - OpenSSH per-connection server daemon (10.200.16.10:55952).
Mar 13 12:24:40.426722 sshd[6935]: Accepted publickey for core from 10.200.16.10 port 55952 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:40.428122 sshd[6935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:40.432706 systemd-logind[1690]: New session 20 of user core.
Mar 13 12:24:40.444658 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 12:24:41.442966 sshd[6935]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:41.447120 systemd[1]: sshd@17-10.200.20.10:22-10.200.16.10:55952.service: Deactivated successfully.
Mar 13 12:24:41.449296 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 12:24:41.450053 systemd-logind[1690]: Session 20 logged out. Waiting for processes to exit.
Mar 13 12:24:41.451518 systemd-logind[1690]: Removed session 20.
Mar 13 12:24:41.531526 systemd[1]: Started sshd@18-10.200.20.10:22-10.200.16.10:55964.service - OpenSSH per-connection server daemon (10.200.16.10:55964).
Mar 13 12:24:42.017190 sshd[6967]: Accepted publickey for core from 10.200.16.10 port 55964 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:42.018750 sshd[6967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:42.024617 systemd-logind[1690]: New session 21 of user core.
Mar 13 12:24:42.029658 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 12:24:42.581069 sshd[6967]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:42.584906 systemd[1]: sshd@18-10.200.20.10:22-10.200.16.10:55964.service: Deactivated successfully.
Mar 13 12:24:42.587208 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 12:24:42.590113 systemd-logind[1690]: Session 21 logged out. Waiting for processes to exit.
Mar 13 12:24:42.591193 systemd-logind[1690]: Removed session 21.
Mar 13 12:24:42.674030 systemd[1]: Started sshd@19-10.200.20.10:22-10.200.16.10:55974.service - OpenSSH per-connection server daemon (10.200.16.10:55974).
Mar 13 12:24:43.159137 sshd[6980]: Accepted publickey for core from 10.200.16.10 port 55974 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:43.160571 sshd[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:43.164397 systemd-logind[1690]: New session 22 of user core.
Mar 13 12:24:43.170633 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 12:24:43.570233 sshd[6980]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:43.574123 systemd[1]: sshd@19-10.200.20.10:22-10.200.16.10:55974.service: Deactivated successfully.
Mar 13 12:24:43.576665 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 12:24:43.577363 systemd-logind[1690]: Session 22 logged out. Waiting for processes to exit.
Mar 13 12:24:43.578358 systemd-logind[1690]: Removed session 22.
Mar 13 12:24:48.661826 systemd[1]: Started sshd@20-10.200.20.10:22-10.200.16.10:55986.service - OpenSSH per-connection server daemon (10.200.16.10:55986).
Mar 13 12:24:49.146646 sshd[6995]: Accepted publickey for core from 10.200.16.10 port 55986 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:49.147565 sshd[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:49.152353 systemd-logind[1690]: New session 23 of user core.
Mar 13 12:24:49.158674 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 12:24:49.557163 sshd[6995]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:49.559999 systemd[1]: sshd@20-10.200.20.10:22-10.200.16.10:55986.service: Deactivated successfully.
Mar 13 12:24:49.562544 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 12:24:49.564602 systemd-logind[1690]: Session 23 logged out. Waiting for processes to exit.
Mar 13 12:24:49.566247 systemd-logind[1690]: Removed session 23.
Mar 13 12:24:54.642702 systemd[1]: Started sshd@21-10.200.20.10:22-10.200.16.10:44306.service - OpenSSH per-connection server daemon (10.200.16.10:44306).
Mar 13 12:24:55.099648 sshd[7029]: Accepted publickey for core from 10.200.16.10 port 44306 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:24:55.101106 sshd[7029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:24:55.104970 systemd-logind[1690]: New session 24 of user core.
Mar 13 12:24:55.111774 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 12:24:55.510217 sshd[7029]: pam_unix(sshd:session): session closed for user core
Mar 13 12:24:55.514450 systemd[1]: sshd@21-10.200.20.10:22-10.200.16.10:44306.service: Deactivated successfully.
Mar 13 12:24:55.517093 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 12:24:55.517930 systemd-logind[1690]: Session 24 logged out. Waiting for processes to exit.
Mar 13 12:24:55.518867 systemd-logind[1690]: Removed session 24.
Mar 13 12:25:00.617820 systemd[1]: Started sshd@22-10.200.20.10:22-10.200.16.10:53426.service - OpenSSH per-connection server daemon (10.200.16.10:53426).
Mar 13 12:25:01.111517 sshd[7064]: Accepted publickey for core from 10.200.16.10 port 53426 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:01.113105 sshd[7064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:01.117999 systemd-logind[1690]: New session 25 of user core.
Mar 13 12:25:01.124667 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 12:25:01.527119 sshd[7064]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:01.531389 systemd[1]: sshd@22-10.200.20.10:22-10.200.16.10:53426.service: Deactivated successfully.
Mar 13 12:25:01.534404 systemd[1]: session-25.scope: Deactivated successfully.
Mar 13 12:25:01.535552 systemd-logind[1690]: Session 25 logged out. Waiting for processes to exit.
Mar 13 12:25:01.536693 systemd-logind[1690]: Removed session 25.
Mar 13 12:25:06.623769 systemd[1]: Started sshd@23-10.200.20.10:22-10.200.16.10:53430.service - OpenSSH per-connection server daemon (10.200.16.10:53430).
Mar 13 12:25:07.114278 sshd[7097]: Accepted publickey for core from 10.200.16.10 port 53430 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:07.116372 sshd[7097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:07.120810 systemd-logind[1690]: New session 26 of user core.
Mar 13 12:25:07.123726 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 12:25:07.527988 sshd[7097]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:07.532692 systemd[1]: sshd@23-10.200.20.10:22-10.200.16.10:53430.service: Deactivated successfully.
Mar 13 12:25:07.534933 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 12:25:07.536179 systemd-logind[1690]: Session 26 logged out. Waiting for processes to exit.
Mar 13 12:25:07.537197 systemd-logind[1690]: Removed session 26.
Mar 13 12:25:12.626119 systemd[1]: Started sshd@24-10.200.20.10:22-10.200.16.10:36708.service - OpenSSH per-connection server daemon (10.200.16.10:36708).
Mar 13 12:25:13.112514 sshd[7111]: Accepted publickey for core from 10.200.16.10 port 36708 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:13.113868 sshd[7111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:13.118105 systemd-logind[1690]: New session 27 of user core.
Mar 13 12:25:13.122643 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 13 12:25:13.530512 sshd[7111]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:13.535682 systemd-logind[1690]: Session 27 logged out. Waiting for processes to exit.
Mar 13 12:25:13.536050 systemd[1]: sshd@24-10.200.20.10:22-10.200.16.10:36708.service: Deactivated successfully.
Mar 13 12:25:13.538624 systemd[1]: session-27.scope: Deactivated successfully.
Mar 13 12:25:13.542935 systemd-logind[1690]: Removed session 27.
Mar 13 12:25:18.636160 systemd[1]: Started sshd@25-10.200.20.10:22-10.200.16.10:36714.service - OpenSSH per-connection server daemon (10.200.16.10:36714).
Mar 13 12:25:19.121415 sshd[7135]: Accepted publickey for core from 10.200.16.10 port 36714 ssh2: RSA SHA256:wv0Hv60tGauojkTsy+TmSIvSWS0y9AEyRHulsFFY/FA
Mar 13 12:25:19.123075 sshd[7135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 12:25:19.127269 systemd-logind[1690]: New session 28 of user core.
Mar 13 12:25:19.132669 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 13 12:25:19.537663 sshd[7135]: pam_unix(sshd:session): session closed for user core
Mar 13 12:25:19.541972 systemd[1]: sshd@25-10.200.20.10:22-10.200.16.10:36714.service: Deactivated successfully.
Mar 13 12:25:19.544325 systemd[1]: session-28.scope: Deactivated successfully.
Mar 13 12:25:19.545449 systemd-logind[1690]: Session 28 logged out. Waiting for processes to exit.
Mar 13 12:25:19.546522 systemd-logind[1690]: Removed session 28.