Sep 12 23:58:02.305761 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 23:58:02.305786 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 12 23:58:02.305794 kernel: KASLR enabled Sep 12 23:58:02.305800 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 12 23:58:02.305808 kernel: printk: bootconsole [pl11] enabled Sep 12 23:58:02.305814 kernel: efi: EFI v2.7 by EDK II Sep 12 23:58:02.305821 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Sep 12 23:58:02.305827 kernel: random: crng init done Sep 12 23:58:02.305833 kernel: ACPI: Early table checksum verification disabled Sep 12 23:58:02.305839 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 12 23:58:02.305845 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305851 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305858 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 23:58:02.305864 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305872 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305879 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305885 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305894 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305900 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 
23:58:02.305907 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 12 23:58:02.305914 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:02.305920 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 12 23:58:02.305927 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 12 23:58:02.305934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Sep 12 23:58:02.305940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Sep 12 23:58:02.305948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Sep 12 23:58:02.305954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Sep 12 23:58:02.305961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Sep 12 23:58:02.305969 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Sep 12 23:58:02.305976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Sep 12 23:58:02.305982 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Sep 12 23:58:02.305989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Sep 12 23:58:02.305995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Sep 12 23:58:02.306001 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Sep 12 23:58:02.306008 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Sep 12 23:58:02.306014 kernel: Zone ranges: Sep 12 23:58:02.306020 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 12 23:58:02.306027 kernel: DMA32 empty Sep 12 23:58:02.306033 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 23:58:02.306040 kernel: Movable zone start for each node Sep 12 23:58:02.306051 kernel: Early memory node ranges Sep 12 23:58:02.306058 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 12 23:58:02.306065 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Sep 12 
23:58:02.306072 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 23:58:02.306079 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 23:58:02.308118 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 23:58:02.308140 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 23:58:02.308148 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 23:58:02.308155 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 23:58:02.308162 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 23:58:02.308169 kernel: psci: probing for conduit method from ACPI. Sep 12 23:58:02.308176 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 23:58:02.308183 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 23:58:02.308190 kernel: psci: MIGRATE_INFO_TYPE not supported. Sep 12 23:58:02.308197 kernel: psci: SMC Calling Convention v1.4 Sep 12 23:58:02.308203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 23:58:02.308210 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 23:58:02.308223 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 23:58:02.308230 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 23:58:02.308237 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 23:58:02.308244 kernel: Detected PIPT I-cache on CPU0 Sep 12 23:58:02.308250 kernel: CPU features: detected: GIC system register CPU interface Sep 12 23:58:02.308258 kernel: CPU features: detected: Hardware dirty bit management Sep 12 23:58:02.308268 kernel: CPU features: detected: Spectre-BHB Sep 12 23:58:02.308276 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 23:58:02.308284 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 23:58:02.308292 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 23:58:02.308299 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 12 23:58:02.308310 
kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 23:58:02.308318 kernel: alternatives: applying boot alternatives Sep 12 23:58:02.308327 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:58:02.308336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 23:58:02.308344 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 23:58:02.308352 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 23:58:02.308360 kernel: Fallback order for Node 0: 0 Sep 12 23:58:02.308368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 12 23:58:02.308376 kernel: Policy zone: Normal Sep 12 23:58:02.308384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 23:58:02.308392 kernel: software IO TLB: area num 2. Sep 12 23:58:02.308402 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 12 23:58:02.308410 kernel: Memory: 3982560K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211600K reserved, 0K cma-reserved) Sep 12 23:58:02.308418 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 23:58:02.308425 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 23:58:02.308432 kernel: rcu: RCU event tracing is enabled. Sep 12 23:58:02.308439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 23:58:02.308446 kernel: Trampoline variant of Tasks RCU enabled. 
Sep 12 23:58:02.308455 kernel: Tracing variant of Tasks RCU enabled. Sep 12 23:58:02.308463 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 23:58:02.308471 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 23:58:02.308479 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 23:58:02.308488 kernel: GICv3: 960 SPIs implemented Sep 12 23:58:02.308496 kernel: GICv3: 0 Extended SPIs implemented Sep 12 23:58:02.308505 kernel: Root IRQ handler: gic_handle_irq Sep 12 23:58:02.308512 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 23:58:02.308520 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 23:58:02.308528 kernel: ITS: No ITS available, not enabling LPIs Sep 12 23:58:02.308536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 23:58:02.308543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 23:58:02.308550 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 23:58:02.308557 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 23:58:02.308564 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 23:58:02.308573 kernel: Console: colour dummy device 80x25 Sep 12 23:58:02.308581 kernel: printk: console [tty1] enabled Sep 12 23:58:02.308592 kernel: ACPI: Core revision 20230628 Sep 12 23:58:02.308600 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 23:58:02.308607 kernel: pid_max: default: 32768 minimum: 301 Sep 12 23:58:02.308614 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 23:58:02.308621 kernel: landlock: Up and running. Sep 12 23:58:02.308628 kernel: SELinux: Initializing. 
Sep 12 23:58:02.308635 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:58:02.308645 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:58:02.308655 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:58:02.308662 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:58:02.308669 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 12 23:58:02.308676 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 12 23:58:02.308683 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 23:58:02.308691 kernel: rcu: Hierarchical SRCU implementation. Sep 12 23:58:02.308701 kernel: rcu: Max phase no-delay instances is 400. Sep 12 23:58:02.308715 kernel: Remapping and enabling EFI services. Sep 12 23:58:02.308723 kernel: smp: Bringing up secondary CPUs ... Sep 12 23:58:02.308730 kernel: Detected PIPT I-cache on CPU1 Sep 12 23:58:02.308738 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 23:58:02.308747 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 23:58:02.308757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 23:58:02.308764 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 23:58:02.308772 kernel: SMP: Total of 2 processors activated. 
Sep 12 23:58:02.308780 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 23:58:02.308791 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 23:58:02.308799 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 23:58:02.308807 kernel: CPU features: detected: CRC32 instructions Sep 12 23:58:02.308814 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 23:58:02.308821 kernel: CPU features: detected: LSE atomic instructions Sep 12 23:58:02.308833 kernel: CPU features: detected: Privileged Access Never Sep 12 23:58:02.308840 kernel: CPU: All CPU(s) started at EL1 Sep 12 23:58:02.308848 kernel: alternatives: applying system-wide alternatives Sep 12 23:58:02.308855 kernel: devtmpfs: initialized Sep 12 23:58:02.308865 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 23:58:02.308872 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 23:58:02.308882 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 23:58:02.308889 kernel: SMBIOS 3.1.0 present. 
Sep 12 23:58:02.308897 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 23:58:02.308905 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 23:58:02.308912 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 23:58:02.308922 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 23:58:02.308930 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 23:58:02.308939 kernel: audit: initializing netlink subsys (disabled) Sep 12 23:58:02.308947 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 12 23:58:02.308954 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 23:58:02.308965 kernel: cpuidle: using governor menu Sep 12 23:58:02.308972 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 12 23:58:02.308980 kernel: ASID allocator initialised with 32768 entries Sep 12 23:58:02.308987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 23:58:02.308994 kernel: Serial: AMBA PL011 UART driver Sep 12 23:58:02.309002 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 23:58:02.309014 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 23:58:02.309022 kernel: Modules: 508992 pages in range for PLT usage Sep 12 23:58:02.309029 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 23:58:02.309037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 23:58:02.309044 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 23:58:02.309052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 23:58:02.309059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 23:58:02.309069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 23:58:02.309076 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 23:58:02.309086 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 23:58:02.309103 kernel: ACPI: Added _OSI(Module Device) Sep 12 23:58:02.309111 kernel: ACPI: Added _OSI(Processor Device) Sep 12 23:58:02.309118 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 23:58:02.309126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 23:58:02.309136 kernel: ACPI: Interpreter enabled Sep 12 23:58:02.309144 kernel: ACPI: Using GIC for interrupt routing Sep 12 23:58:02.309151 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 23:58:02.309159 kernel: printk: console [ttyAMA0] enabled Sep 12 23:58:02.309168 kernel: printk: bootconsole [pl11] disabled Sep 12 23:58:02.309176 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 23:58:02.309184 kernel: iommu: Default domain type: Translated Sep 12 23:58:02.309194 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 23:58:02.309201 kernel: efivars: Registered efivars operations Sep 12 23:58:02.309209 kernel: vgaarb: loaded Sep 12 23:58:02.309216 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 23:58:02.309224 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 23:58:02.309231 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 23:58:02.309240 kernel: pnp: PnP ACPI init Sep 12 23:58:02.309250 kernel: pnp: PnP ACPI: found 0 devices Sep 12 23:58:02.309258 kernel: NET: Registered PF_INET protocol family Sep 12 23:58:02.309265 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 23:58:02.309273 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 23:58:02.309280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 23:58:02.309288 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Sep 12 23:58:02.309298 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 23:58:02.309305 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 23:58:02.309314 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:58:02.309322 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:58:02.309330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 23:58:02.309337 kernel: PCI: CLS 0 bytes, default 64 Sep 12 23:58:02.309347 kernel: kvm [1]: HYP mode not available Sep 12 23:58:02.309354 kernel: Initialise system trusted keyrings Sep 12 23:58:02.309362 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 23:58:02.309369 kernel: Key type asymmetric registered Sep 12 23:58:02.309376 kernel: Asymmetric key parser 'x509' registered Sep 12 23:58:02.309389 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 23:58:02.309397 kernel: io scheduler mq-deadline registered Sep 12 23:58:02.309405 kernel: io scheduler kyber registered Sep 12 23:58:02.309413 kernel: io scheduler bfq registered Sep 12 23:58:02.309420 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 23:58:02.309427 kernel: thunder_xcv, ver 1.0 Sep 12 23:58:02.309435 kernel: thunder_bgx, ver 1.0 Sep 12 23:58:02.309442 kernel: nicpf, ver 1.0 Sep 12 23:58:02.309452 kernel: nicvf, ver 1.0 Sep 12 23:58:02.309630 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 23:58:02.309724 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:58:01 UTC (1757721481) Sep 12 23:58:02.309735 kernel: efifb: probing for efifb Sep 12 23:58:02.309743 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 23:58:02.309754 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 23:58:02.309761 kernel: efifb: scrolling: redraw Sep 12 23:58:02.309769 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Sep 12 23:58:02.309776 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 23:58:02.309786 kernel: fb0: EFI VGA frame buffer device Sep 12 23:58:02.309793 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 12 23:58:02.309804 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 23:58:02.309811 kernel: No ACPI PMU IRQ for CPU0 Sep 12 23:58:02.309819 kernel: No ACPI PMU IRQ for CPU1 Sep 12 23:58:02.309826 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 12 23:58:02.309833 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 23:58:02.309841 kernel: watchdog: Hard watchdog permanently disabled Sep 12 23:58:02.309848 kernel: NET: Registered PF_INET6 protocol family Sep 12 23:58:02.309859 kernel: Segment Routing with IPv6 Sep 12 23:58:02.309866 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 23:58:02.309874 kernel: NET: Registered PF_PACKET protocol family Sep 12 23:58:02.309881 kernel: Key type dns_resolver registered Sep 12 23:58:02.309889 kernel: registered taskstats version 1 Sep 12 23:58:02.309896 kernel: Loading compiled-in X.509 certificates Sep 12 23:58:02.309904 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 12 23:58:02.309911 kernel: Key type .fscrypt registered Sep 12 23:58:02.309918 kernel: Key type fscrypt-provisioning registered Sep 12 23:58:02.309928 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 23:58:02.309935 kernel: ima: Allocated hash algorithm: sha1 Sep 12 23:58:02.309943 kernel: ima: No architecture policies found Sep 12 23:58:02.309950 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 23:58:02.309958 kernel: clk: Disabling unused clocks Sep 12 23:58:02.309965 kernel: Freeing unused kernel memory: 39488K Sep 12 23:58:02.309973 kernel: Run /init as init process Sep 12 23:58:02.309980 kernel: with arguments: Sep 12 23:58:02.309987 kernel: /init Sep 12 23:58:02.309996 kernel: with environment: Sep 12 23:58:02.310003 kernel: HOME=/ Sep 12 23:58:02.310010 kernel: TERM=linux Sep 12 23:58:02.310018 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 23:58:02.310027 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:58:02.310038 systemd[1]: Detected virtualization microsoft. Sep 12 23:58:02.310046 systemd[1]: Detected architecture arm64. Sep 12 23:58:02.310053 systemd[1]: Running in initrd. Sep 12 23:58:02.310063 systemd[1]: No hostname configured, using default hostname. Sep 12 23:58:02.310071 systemd[1]: Hostname set to . Sep 12 23:58:02.310079 systemd[1]: Initializing machine ID from random generator. Sep 12 23:58:02.310087 systemd[1]: Queued start job for default target initrd.target. Sep 12 23:58:02.312175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:58:02.312186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:58:02.312195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 12 23:58:02.312204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:58:02.312218 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 23:58:02.312227 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 23:58:02.312237 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 23:58:02.312245 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 23:58:02.312253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:58:02.312261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:58:02.312271 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:58:02.312279 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:58:02.312287 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:58:02.312295 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:58:02.312303 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:58:02.312311 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:58:02.312319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 23:58:02.312327 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 23:58:02.312335 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:58:02.312345 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:58:02.312353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:58:02.312361 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 23:58:02.312369 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 23:58:02.312377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:58:02.312385 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 23:58:02.312393 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 23:58:02.312401 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:58:02.312409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:58:02.312448 systemd-journald[216]: Collecting audit messages is disabled. Sep 12 23:58:02.312469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:02.312478 systemd-journald[216]: Journal started Sep 12 23:58:02.312499 systemd-journald[216]: Runtime Journal (/run/log/journal/9e6d80eb725547739403268b91ec5b29) is 8.0M, max 78.5M, 70.5M free. Sep 12 23:58:02.317345 systemd-modules-load[217]: Inserted module 'overlay' Sep 12 23:58:02.336009 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:58:02.346174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 23:58:02.364552 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 23:58:02.364576 kernel: Bridge firewalling registered Sep 12 23:58:02.358419 systemd-modules-load[217]: Inserted module 'br_netfilter' Sep 12 23:58:02.359547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:58:02.371326 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 23:58:02.380396 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:58:02.392484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 23:58:02.419486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:02.428272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:58:02.456301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:58:02.469260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:58:02.491117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:02.499698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:02.512179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:58:02.523972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:58:02.547342 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 23:58:02.555275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:58:02.577861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:58:02.594880 dracut-cmdline[251]: dracut-dracut-053 Sep 12 23:58:02.601681 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:58:02.602177 systemd-resolved[252]: Positive Trust Anchors: Sep 12 23:58:02.602186 systemd-resolved[252]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:58:02.602218 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:58:02.606345 systemd-resolved[252]: Defaulting to hostname 'linux'. Sep 12 23:58:02.632325 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:58:02.639272 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:58:02.659380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:02.763115 kernel: SCSI subsystem initialized Sep 12 23:58:02.772105 kernel: Loading iSCSI transport class v2.0-870. Sep 12 23:58:02.781118 kernel: iscsi: registered transport (tcp) Sep 12 23:58:02.799102 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:58:02.799165 kernel: QLogic iSCSI HBA Driver Sep 12 23:58:02.833514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:58:02.851243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:58:02.882902 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 23:58:02.882959 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:58:02.889048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 23:58:02.938130 kernel: raid6: neonx8 gen() 15728 MB/s Sep 12 23:58:02.958111 kernel: raid6: neonx4 gen() 15673 MB/s Sep 12 23:58:02.978105 kernel: raid6: neonx2 gen() 13226 MB/s Sep 12 23:58:02.999108 kernel: raid6: neonx1 gen() 10520 MB/s Sep 12 23:58:03.019100 kernel: raid6: int64x8 gen() 6975 MB/s Sep 12 23:58:03.039106 kernel: raid6: int64x4 gen() 7337 MB/s Sep 12 23:58:03.060108 kernel: raid6: int64x2 gen() 6134 MB/s Sep 12 23:58:03.083305 kernel: raid6: int64x1 gen() 5061 MB/s Sep 12 23:58:03.083326 kernel: raid6: using algorithm neonx8 gen() 15728 MB/s Sep 12 23:58:03.107164 kernel: raid6: .... xor() 12062 MB/s, rmw enabled Sep 12 23:58:03.107202 kernel: raid6: using neon recovery algorithm Sep 12 23:58:03.119656 kernel: xor: measuring software checksum speed Sep 12 23:58:03.119686 kernel: 8regs : 19721 MB/sec Sep 12 23:58:03.123078 kernel: 32regs : 19622 MB/sec Sep 12 23:58:03.126411 kernel: arm64_neon : 26910 MB/sec Sep 12 23:58:03.130621 kernel: xor: using function: arm64_neon (26910 MB/sec) Sep 12 23:58:03.182112 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:58:03.191157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:58:03.207219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:58:03.230172 systemd-udevd[438]: Using default interface naming scheme 'v255'. Sep 12 23:58:03.235539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:58:03.260337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:58:03.272809 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation Sep 12 23:58:03.298024 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 23:58:03.312522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:58:03.350854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:58:03.370304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 23:58:03.393999 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:58:03.406049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:58:03.427351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:58:03.449288 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:58:03.472343 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:58:03.495498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:58:03.514933 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 23:58:03.509132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:58:03.509245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:03.529470 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:03.543223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:58:03.543373 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:03.557599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:03.591004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 23:58:03.638502 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 23:58:03.638528 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 23:58:03.638538 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 23:58:03.638548 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 23:58:03.638563 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 12 23:58:03.638573 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 12 23:58:03.638583 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 23:58:03.638592 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 23:58:03.647397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:58:03.668316 kernel: scsi host0: storvsc_host_t
Sep 12 23:58:03.668480 kernel: scsi host1: storvsc_host_t
Sep 12 23:58:03.668577 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 23:58:03.647536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:03.686879 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 23:58:03.694567 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 12 23:58:03.696333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:03.714120 kernel: PTP clock support registered
Sep 12 23:58:03.725181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:03.746963 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 23:58:03.746984 kernel: hv_vmbus: registering driver hv_utils
Sep 12 23:58:03.746993 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 23:58:03.747003 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 23:58:03.755503 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: VF slot 1 added
Sep 12 23:58:03.759085 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 23:58:03.759247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:58:03.489319 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 23:58:03.502298 systemd-journald[216]: Time jumped backwards, rotating.
Sep 12 23:58:03.502345 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 23:58:03.474732 systemd-resolved[252]: Clock change detected. Flushing caches.
Sep 12 23:58:03.516747 kernel: hv_vmbus: registering driver hv_pci
Sep 12 23:58:03.516764 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 23:58:03.521082 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 23:58:03.527805 kernel: hv_pci e78c236d-8ca2-4144-b3c4-23d045e014b0: PCI VMBus probing: Using version 0x10004
Sep 12 23:58:03.527982 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 23:58:03.538349 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 23:58:03.531636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:58:03.693666 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 23:58:03.693899 kernel: hv_pci e78c236d-8ca2-4144-b3c4-23d045e014b0: PCI host bridge to bus 8ca2:00
Sep 12 23:58:03.694001 kernel: pci_bus 8ca2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 12 23:58:03.705744 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 23:58:03.705929 kernel: pci_bus 8ca2:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 23:58:03.713661 kernel: pci 8ca2:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 12 23:58:03.723367 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:03.723388 kernel: pci 8ca2:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 23:58:03.723408 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 23:58:03.731240 kernel: pci 8ca2:00:02.0: enabling Extended Tags
Sep 12 23:58:03.753109 kernel: pci 8ca2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8ca2:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 12 23:58:03.765332 kernel: pci_bus 8ca2:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 23:58:03.765522 kernel: pci 8ca2:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 23:58:03.813661 kernel: mlx5_core 8ca2:00:02.0: enabling device (0000 -> 0002)
Sep 12 23:58:03.820106 kernel: mlx5_core 8ca2:00:02.0: firmware version: 16.31.2424
Sep 12 23:58:04.101360 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: VF registering: eth1
Sep 12 23:58:04.101675 kernel: mlx5_core 8ca2:00:02.0 eth1: joined to eth0
Sep 12 23:58:04.111184 kernel: mlx5_core 8ca2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 12 23:58:04.122140 kernel: mlx5_core 8ca2:00:02.0 enP36002s1: renamed from eth1
Sep 12 23:58:04.360929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 23:58:04.386117 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (494)
Sep 12 23:58:04.399729 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 23:58:04.428077 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (495)
Sep 12 23:58:04.421176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 23:58:04.440317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 23:58:04.463388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 23:58:04.488246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:58:04.516129 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:04.525114 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:04.537112 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:05.539146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:05.539957 disk-uuid[607]: The operation has completed successfully.
Sep 12 23:58:05.610014 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:58:05.612117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:58:05.643281 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:58:05.657025 sh[720]: Success
Sep 12 23:58:05.688125 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 23:58:06.056635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:58:06.077227 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:58:06.089120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:58:06.125514 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 12 23:58:06.125561 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:06.132030 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:58:06.137210 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:58:06.141301 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:58:06.546868 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:58:06.552271 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:58:06.573396 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:58:06.581220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:58:06.625142 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:06.625185 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:06.629510 kernel: BTRFS info (device sda6): using free space tree
Sep 12 23:58:06.669140 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:58:06.692220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:58:06.712030 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 23:58:06.717970 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:58:06.731439 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:06.729899 systemd-networkd[894]: lo: Link UP
Sep 12 23:58:06.729902 systemd-networkd[894]: lo: Gained carrier
Sep 12 23:58:06.735515 systemd-networkd[894]: Enumeration completed
Sep 12 23:58:06.735689 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:58:06.742415 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:58:06.742418 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:58:06.743536 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:58:06.753110 systemd[1]: Reached target network.target - Network.
Sep 12 23:58:06.786338 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:58:06.859342 kernel: mlx5_core 8ca2:00:02.0 enP36002s1: Link up
Sep 12 23:58:06.859592 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 23:58:06.935112 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: Data path switched to VF: enP36002s1
Sep 12 23:58:06.935410 systemd-networkd[894]: enP36002s1: Link UP
Sep 12 23:58:06.935656 systemd-networkd[894]: eth0: Link UP
Sep 12 23:58:06.936040 systemd-networkd[894]: eth0: Gained carrier
Sep 12 23:58:06.936049 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:58:06.959609 systemd-networkd[894]: enP36002s1: Gained carrier
Sep 12 23:58:06.970139 systemd-networkd[894]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 23:58:07.830773 ignition[905]: Ignition 2.19.0
Sep 12 23:58:07.830788 ignition[905]: Stage: fetch-offline
Sep 12 23:58:07.830826 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:07.835345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:58:07.830835 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:07.857223 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 23:58:07.830955 ignition[905]: parsed url from cmdline: ""
Sep 12 23:58:07.830959 ignition[905]: no config URL provided
Sep 12 23:58:07.830964 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:58:07.830971 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:58:07.830976 ignition[905]: failed to fetch config: resource requires networking
Sep 12 23:58:07.834401 ignition[905]: Ignition finished successfully
Sep 12 23:58:07.870874 ignition[914]: Ignition 2.19.0
Sep 12 23:58:07.870880 ignition[914]: Stage: fetch
Sep 12 23:58:07.871849 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:07.871870 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:07.871989 ignition[914]: parsed url from cmdline: ""
Sep 12 23:58:07.871993 ignition[914]: no config URL provided
Sep 12 23:58:07.872003 ignition[914]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:58:07.872011 ignition[914]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:58:07.872032 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 23:58:07.968244 ignition[914]: GET result: OK
Sep 12 23:58:07.968312 ignition[914]: config has been read from IMDS userdata
Sep 12 23:58:07.968357 ignition[914]: parsing config with SHA512: 91a4cc969166eccf8b2d3a479b4b1ef49237eec203a31ee94ee57d559f58d335a51d91d7740e793069670c0dd387e2d399571e2b3cbd7a623a5ef9208140ce47
Sep 12 23:58:07.973116 unknown[914]: fetched base config from "system"
Sep 12 23:58:07.973581 ignition[914]: fetch: fetch complete
Sep 12 23:58:07.973125 unknown[914]: fetched base config from "system"
Sep 12 23:58:07.973585 ignition[914]: fetch: fetch passed
Sep 12 23:58:07.973130 unknown[914]: fetched user config from "azure"
Sep 12 23:58:07.973630 ignition[914]: Ignition finished successfully
Sep 12 23:58:07.977963 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 23:58:07.999312 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:58:08.015042 ignition[920]: Ignition 2.19.0
Sep 12 23:58:08.017434 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:58:08.015048 ignition[920]: Stage: kargs
Sep 12 23:58:08.015238 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:08.041362 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:58:08.015248 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:08.066115 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:58:08.016288 ignition[920]: kargs: kargs passed
Sep 12 23:58:08.072255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:58:08.016351 ignition[920]: Ignition finished successfully
Sep 12 23:58:08.084334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:58:08.058804 ignition[927]: Ignition 2.19.0
Sep 12 23:58:08.096020 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:58:08.058810 ignition[927]: Stage: disks
Sep 12 23:58:08.104641 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:58:08.058967 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:08.116085 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:58:08.058976 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:08.140325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:58:08.059925 ignition[927]: disks: disks passed
Sep 12 23:58:08.059970 ignition[927]: Ignition finished successfully
Sep 12 23:58:08.206333 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 12 23:58:08.213342 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:58:08.229297 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:58:08.249501 systemd-networkd[894]: eth0: Gained IPv6LL
Sep 12 23:58:08.291106 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 12 23:58:08.291960 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:58:08.296891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:58:08.388164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:58:08.413106 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Sep 12 23:58:08.413146 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:08.423847 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:08.424144 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:58:08.444275 kernel: BTRFS info (device sda6): using free space tree
Sep 12 23:58:08.429304 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 23:58:08.451761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:58:08.463738 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:58:08.471732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:58:08.493108 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 23:58:08.494376 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:58:08.501462 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:58:09.007124 coreos-metadata[948]: Sep 12 23:58:09.007 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 23:58:09.017569 coreos-metadata[948]: Sep 12 23:58:09.017 INFO Fetch successful
Sep 12 23:58:09.023043 coreos-metadata[948]: Sep 12 23:58:09.019 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 23:58:09.034721 coreos-metadata[948]: Sep 12 23:58:09.034 INFO Fetch successful
Sep 12 23:58:09.040472 coreos-metadata[948]: Sep 12 23:58:09.036 INFO wrote hostname ci-4081.3.5-n-d2844e2d10 to /sysroot/etc/hostname
Sep 12 23:58:09.037362 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 23:58:10.281268 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:58:10.338572 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:58:10.360781 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:58:10.369564 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:58:11.572356 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:58:11.589290 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:58:11.598269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:58:11.619392 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:11.616066 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:58:11.643623 ignition[1063]: INFO : Ignition 2.19.0
Sep 12 23:58:11.643623 ignition[1063]: INFO : Stage: mount
Sep 12 23:58:11.643623 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:11.643623 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:11.670938 ignition[1063]: INFO : mount: mount passed
Sep 12 23:58:11.670938 ignition[1063]: INFO : Ignition finished successfully
Sep 12 23:58:11.652016 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:58:11.675332 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:58:11.687892 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:58:11.716381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:58:11.742113 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076)
Sep 12 23:58:11.754722 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:11.754750 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:11.758710 kernel: BTRFS info (device sda6): using free space tree
Sep 12 23:58:11.767104 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 23:58:11.768996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:58:11.800127 ignition[1093]: INFO : Ignition 2.19.0
Sep 12 23:58:11.800127 ignition[1093]: INFO : Stage: files
Sep 12 23:58:11.800127 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:11.800127 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:11.820512 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:58:11.834670 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:58:11.842167 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:58:11.935608 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:58:11.943029 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:58:11.943029 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:58:11.936119 unknown[1093]: wrote ssh authorized keys file for user: core
Sep 12 23:58:11.995205 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 23:58:12.004736 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 23:58:12.004736 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:58:12.004736 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 23:58:12.158573 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 23:58:12.404934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 23:58:12.906665 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 23:58:13.108120 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:58:13.108120 ignition[1093]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: files passed
Sep 12 23:58:13.171224 ignition[1093]: INFO : Ignition finished successfully
Sep 12 23:58:13.173086 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:58:13.210369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:58:13.225262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:58:13.354823 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:58:13.354823 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:58:13.251916 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:58:13.383836 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:58:13.252003 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:58:13.284886 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:58:13.292401 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:58:13.328387 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:58:13.365313 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:58:13.365412 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:58:13.379190 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:58:13.389574 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:58:13.403944 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:58:13.406274 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:58:13.439367 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:58:13.462240 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:58:13.483413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:58:13.493000 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:58:13.505904 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:58:13.517259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:58:13.517344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:58:13.536741 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:58:13.548510 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:58:13.558579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:58:13.576827 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:58:13.589978 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:58:13.601431 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:58:13.612941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:58:13.625478 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:58:13.639530 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:58:13.651686 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:58:13.703165 kernel: mlx5_core 8ca2:00:02.0: poll_health:835:(pid 21): device's health compromised - reached miss count
Sep 12 23:58:13.661429 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:58:13.661503 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:58:13.679051 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:58:13.699075 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:58:13.727020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:58:13.727286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:58:13.735164 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:58:13.735237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:58:13.752951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:58:13.753012 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:58:13.767164 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:58:13.767209 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:58:13.778591 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 23:58:13.778634 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 23:58:13.843639 ignition[1147]: INFO : Ignition 2.19.0
Sep 12 23:58:13.843639 ignition[1147]: INFO : Stage: umount
Sep 12 23:58:13.843639 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:13.843639 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:13.843639 ignition[1147]: INFO : umount: umount passed
Sep 12 23:58:13.843639 ignition[1147]: INFO : Ignition finished successfully
Sep 12 23:58:13.811278 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:58:13.837236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:58:13.848030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:58:13.848107 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:58:13.864078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:58:13.864166 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:58:13.878175 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:58:13.878277 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:58:13.890453 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:58:13.890548 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:58:13.906742 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:58:13.906835 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:58:13.916604 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:58:13.916651 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:58:13.927665 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 23:58:13.927707 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 23:58:13.939729 systemd[1]: Stopped target network.target - Network.
Sep 12 23:58:13.950164 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:58:13.950229 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:58:13.962775 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:58:13.973205 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:58:13.979344 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:58:13.986653 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:58:13.996695 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:58:14.006815 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:58:14.006878 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:58:14.016992 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:58:14.017043 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:58:14.027530 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:58:14.027589 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:58:14.037956 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:58:14.038002 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:58:14.049789 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:58:14.060897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:58:14.071207 systemd-networkd[894]: eth0: DHCPv6 lease lost
Sep 12 23:58:14.072734 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:58:14.072856 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:58:14.092573 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:58:14.092678 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:58:14.107143 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:58:14.107198 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:58:14.137346 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:58:14.146314 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:58:14.146385 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:58:14.157852 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:58:14.347677 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: Data path switched from VF: enP36002s1
Sep 12 23:58:14.157895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:58:14.169019 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:58:14.169062 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:58:14.179633 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:58:14.179673 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:58:14.192275 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:58:14.205340 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:58:14.237826 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:58:14.238002 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:58:14.252140 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:58:14.252254 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:58:14.264053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:58:14.264148 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:58:14.274996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:58:14.275036 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:58:14.286599 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:58:14.286656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:58:14.303994 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:58:14.304046 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:58:14.328684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:58:14.328737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:58:14.347729 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:58:14.347783 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:58:14.372295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:58:14.387750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:58:14.387826 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:58:14.403214 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 23:58:14.403291 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:58:14.420031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:58:14.420129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:58:14.432138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:58:14.432183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:14.444029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:58:14.446180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:58:14.463012 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:58:14.463143 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:58:14.472657 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:58:14.503540 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:58:14.721625 systemd[1]: Switching root.
Sep 12 23:58:14.752740 systemd-journald[216]: Journal stopped
Sep 12 23:58:02.306072 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 12 23:58:02.306079 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 12 23:58:02.308118 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 12 23:58:02.308140 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 12 23:58:02.308148 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 23:58:02.308155 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 12 23:58:02.308162 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 12 23:58:02.308169 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:58:02.308176 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:58:02.308183 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:58:02.308190 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 12 23:58:02.308197 kernel: psci: SMC Calling Convention v1.4
Sep 12 23:58:02.308203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 12 23:58:02.308210 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 12 23:58:02.308223 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:58:02.308230 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:58:02.308237 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 23:58:02.308244 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:58:02.308250 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:58:02.308258 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 23:58:02.308268 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:58:02.308276 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:58:02.308284 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:58:02.308292 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:58:02.308299 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 12 23:58:02.308310 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:58:02.308318 kernel: alternatives: applying boot alternatives
Sep 12 23:58:02.308327 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:58:02.308336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:58:02.308344 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:58:02.308352 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:58:02.308360 kernel: Fallback order for Node 0: 0
Sep 12 23:58:02.308368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 12 23:58:02.308376 kernel: Policy zone: Normal
Sep 12 23:58:02.308384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:58:02.308392 kernel: software IO TLB: area num 2.
Sep 12 23:58:02.308402 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 12 23:58:02.308410 kernel: Memory: 3982560K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211600K reserved, 0K cma-reserved)
Sep 12 23:58:02.308418 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 23:58:02.308425 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:58:02.308432 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:58:02.308439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 23:58:02.308446 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:58:02.308455 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:58:02.308463 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:58:02.308471 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 23:58:02.308479 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:58:02.308488 kernel: GICv3: 960 SPIs implemented
Sep 12 23:58:02.308496 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:58:02.308505 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:58:02.308512 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:58:02.308520 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 12 23:58:02.308528 kernel: ITS: No ITS available, not enabling LPIs
Sep 12 23:58:02.308536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:58:02.308543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:58:02.308550 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:58:02.308557 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:58:02.308564 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:58:02.308573 kernel: Console: colour dummy device 80x25
Sep 12 23:58:02.308581 kernel: printk: console [tty1] enabled
Sep 12 23:58:02.308592 kernel: ACPI: Core revision 20230628
Sep 12 23:58:02.308600 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:58:02.308607 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:58:02.308614 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:58:02.308621 kernel: landlock: Up and running.
Sep 12 23:58:02.308628 kernel: SELinux: Initializing.
Sep 12 23:58:02.308635 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:58:02.308645 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:58:02.308655 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:58:02.308662 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:58:02.308669 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 12 23:58:02.308676 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 12 23:58:02.308683 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 23:58:02.308691 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:58:02.308701 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:58:02.308715 kernel: Remapping and enabling EFI services.
Sep 12 23:58:02.308723 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:58:02.308730 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:58:02.308738 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 12 23:58:02.308747 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:58:02.308757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:58:02.308764 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 23:58:02.308772 kernel: SMP: Total of 2 processors activated.
Sep 12 23:58:02.308780 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:58:02.308791 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 12 23:58:02.308799 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:58:02.308807 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:58:02.308814 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:58:02.308821 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:58:02.308833 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:58:02.308840 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:58:02.308848 kernel: alternatives: applying system-wide alternatives
Sep 12 23:58:02.308855 kernel: devtmpfs: initialized
Sep 12 23:58:02.308865 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:58:02.308872 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 23:58:02.308882 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:58:02.308889 kernel: SMBIOS 3.1.0 present.
Sep 12 23:58:02.308897 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 12 23:58:02.308905 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:58:02.308912 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:58:02.308922 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:58:02.308930 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:58:02.308939 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:58:02.308947 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 12 23:58:02.308954 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:58:02.308965 kernel: cpuidle: using governor menu
Sep 12 23:58:02.308972 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:58:02.308980 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:58:02.308987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:58:02.308994 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:58:02.309002 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:58:02.309014 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 23:58:02.309022 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:58:02.309029 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:58:02.309037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:58:02.309044 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:58:02.309052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:58:02.309059 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:58:02.309069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:58:02.309076 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:58:02.309086 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:58:02.309103 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:58:02.309111 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:58:02.309118 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:58:02.309126 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:58:02.309136 kernel: ACPI: Interpreter enabled
Sep 12 23:58:02.309144 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:58:02.309151 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:58:02.309159 kernel: printk: console [ttyAMA0] enabled
Sep 12 23:58:02.309168 kernel: printk: bootconsole [pl11] disabled
Sep 12 23:58:02.309176 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 12 23:58:02.309184 kernel: iommu: Default domain type: Translated
Sep 12 23:58:02.309194 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:58:02.309201 kernel: efivars: Registered efivars operations
Sep 12 23:58:02.309209 kernel: vgaarb: loaded
Sep 12 23:58:02.309216 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:58:02.309224 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:58:02.309231 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:58:02.309240 kernel: pnp: PnP ACPI init
Sep 12 23:58:02.309250 kernel: pnp: PnP ACPI: found 0 devices
Sep 12 23:58:02.309258 kernel: NET: Registered PF_INET protocol family
Sep 12 23:58:02.309265 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:58:02.309273 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:58:02.309280 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:58:02.309288 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:58:02.309298 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:58:02.309305 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:58:02.309314 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:58:02.309322 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:58:02.309330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:58:02.309337 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:58:02.309347 kernel: kvm [1]: HYP mode not available
Sep 12 23:58:02.309354 kernel: Initialise system trusted keyrings
Sep 12 23:58:02.309362 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:58:02.309369 kernel: Key type asymmetric registered
Sep 12 23:58:02.309376 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:58:02.309389 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:58:02.309397 kernel: io scheduler mq-deadline registered
Sep 12 23:58:02.309405 kernel: io scheduler kyber registered
Sep 12 23:58:02.309413 kernel: io scheduler bfq registered
Sep 12 23:58:02.309420 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:58:02.309427 kernel: thunder_xcv, ver 1.0
Sep 12 23:58:02.309435 kernel: thunder_bgx, ver 1.0
Sep 12 23:58:02.309442 kernel: nicpf, ver 1.0
Sep 12 23:58:02.309452 kernel: nicvf, ver 1.0
Sep 12 23:58:02.309630 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:58:02.309724 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:58:01 UTC (1757721481)
Sep 12 23:58:02.309735 kernel: efifb: probing for efifb
Sep 12 23:58:02.309743 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 23:58:02.309754 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 23:58:02.309761 kernel: efifb: scrolling: redraw
Sep 12 23:58:02.309769 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 23:58:02.309776 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 23:58:02.309786 kernel: fb0: EFI VGA frame buffer device
Sep 12 23:58:02.309793 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 12 23:58:02.309804 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:58:02.309811 kernel: No ACPI PMU IRQ for CPU0
Sep 12 23:58:02.309819 kernel: No ACPI PMU IRQ for CPU1
Sep 12 23:58:02.309826 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 12 23:58:02.309833 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:58:02.309841 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:58:02.309848 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:58:02.309859 kernel: Segment Routing with IPv6
Sep 12 23:58:02.309866 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:58:02.309874 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:58:02.309881 kernel: Key type dns_resolver registered
Sep 12 23:58:02.309889 kernel: registered taskstats version 1
Sep 12 23:58:02.309896 kernel: Loading compiled-in X.509 certificates
Sep 12 23:58:02.309904 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:58:02.309911 kernel: Key type .fscrypt registered
Sep 12 23:58:02.309918 kernel: Key type fscrypt-provisioning registered
Sep 12 23:58:02.309928 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:58:02.309935 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:58:02.309943 kernel: ima: No architecture policies found
Sep 12 23:58:02.309950 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:58:02.309958 kernel: clk: Disabling unused clocks
Sep 12 23:58:02.309965 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:58:02.309973 kernel: Run /init as init process
Sep 12 23:58:02.309980 kernel: with arguments:
Sep 12 23:58:02.309987 kernel: /init
Sep 12 23:58:02.309996 kernel: with environment:
Sep 12 23:58:02.310003 kernel: HOME=/
Sep 12 23:58:02.310010 kernel: TERM=linux
Sep 12 23:58:02.310018 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:58:02.310027 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:58:02.310038 systemd[1]: Detected virtualization microsoft.
Sep 12 23:58:02.310046 systemd[1]: Detected architecture arm64.
Sep 12 23:58:02.310053 systemd[1]: Running in initrd.
Sep 12 23:58:02.310063 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:58:02.310071 systemd[1]: Hostname set to .
Sep 12 23:58:02.310079 systemd[1]: Initializing machine ID from random generator.
Sep 12 23:58:02.310087 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:58:02.312175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:58:02.312186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:58:02.312195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:58:02.312204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:58:02.312218 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:58:02.312227 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:58:02.312237 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:58:02.312245 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:58:02.312253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:58:02.312261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:58:02.312271 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:58:02.312279 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:58:02.312287 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:58:02.312295 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:58:02.312303 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:58:02.312311 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:58:02.312319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:58:02.312327 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:58:02.312335 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:58:02.312345 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:58:02.312353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:58:02.312361 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:58:02.312369 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:58:02.312377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:58:02.312385 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:58:02.312393 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:58:02.312401 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:58:02.312409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:58:02.312448 systemd-journald[216]: Collecting audit messages is disabled.
Sep 12 23:58:02.312469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:02.312478 systemd-journald[216]: Journal started
Sep 12 23:58:02.312499 systemd-journald[216]: Runtime Journal (/run/log/journal/9e6d80eb725547739403268b91ec5b29) is 8.0M, max 78.5M, 70.5M free.
Sep 12 23:58:02.317345 systemd-modules-load[217]: Inserted module 'overlay'
Sep 12 23:58:02.336009 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:58:02.346174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:58:02.364552 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:58:02.364576 kernel: Bridge firewalling registered
Sep 12 23:58:02.358419 systemd-modules-load[217]: Inserted module 'br_netfilter'
Sep 12 23:58:02.359547 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:58:02.371326 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:58:02.380396 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:58:02.392484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:02.419486 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:02.428272 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:58:02.456301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:58:02.469260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:58:02.491117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:02.499698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:02.512179 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:58:02.523972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:58:02.547342 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 23:58:02.555275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:58:02.577861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:58:02.594880 dracut-cmdline[251]: dracut-dracut-053 Sep 12 23:58:02.601681 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:58:02.602177 systemd-resolved[252]: Positive Trust Anchors:
Sep 12 23:58:02.602186 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:58:02.602218 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:58:02.606345 systemd-resolved[252]: Defaulting to hostname 'linux'. Sep 12 23:58:02.632325 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:58:02.639272 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:58:02.659380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:02.763115 kernel: SCSI subsystem initialized Sep 12 23:58:02.772105 kernel: Loading iSCSI transport class v2.0-870. Sep 12 23:58:02.781118 kernel: iscsi: registered transport (tcp) Sep 12 23:58:02.799102 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:58:02.799165 kernel: QLogic iSCSI HBA Driver Sep 12 23:58:02.833514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:58:02.851243 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:58:02.882902 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:58:02.882959 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:58:02.889048 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 23:58:02.938130 kernel: raid6: neonx8 gen() 15728 MB/s Sep 12 23:58:02.958111 kernel: raid6: neonx4 gen() 15673 MB/s Sep 12 23:58:02.978105 kernel: raid6: neonx2 gen() 13226 MB/s Sep 12 23:58:02.999108 kernel: raid6: neonx1 gen() 10520 MB/s Sep 12 23:58:03.019100 kernel: raid6: int64x8 gen() 6975 MB/s Sep 12 23:58:03.039106 kernel: raid6: int64x4 gen() 7337 MB/s Sep 12 23:58:03.060108 kernel: raid6: int64x2 gen() 6134 MB/s Sep 12 23:58:03.083305 kernel: raid6: int64x1 gen() 5061 MB/s Sep 12 23:58:03.083326 kernel: raid6: using algorithm neonx8 gen() 15728 MB/s Sep 12 23:58:03.107164 kernel: raid6: .... xor() 12062 MB/s, rmw enabled Sep 12 23:58:03.107202 kernel: raid6: using neon recovery algorithm Sep 12 23:58:03.119656 kernel: xor: measuring software checksum speed Sep 12 23:58:03.119686 kernel: 8regs : 19721 MB/sec Sep 12 23:58:03.123078 kernel: 32regs : 19622 MB/sec Sep 12 23:58:03.126411 kernel: arm64_neon : 26910 MB/sec Sep 12 23:58:03.130621 kernel: xor: using function: arm64_neon (26910 MB/sec) Sep 12 23:58:03.182112 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:58:03.191157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:58:03.207219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:58:03.230172 systemd-udevd[438]: Using default interface naming scheme 'v255'. Sep 12 23:58:03.235539 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:58:03.260337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:58:03.272809 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation Sep 12 23:58:03.298024 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 23:58:03.312522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:58:03.350854 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:58:03.370304 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 23:58:03.393999 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:58:03.406049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:58:03.427351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:58:03.449288 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:58:03.472343 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:58:03.495498 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:58:03.514933 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 23:58:03.509132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:58:03.509245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:03.529470 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:03.543223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:58:03.543373 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:03.557599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:03.591004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 23:58:03.638502 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 23:58:03.638528 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 23:58:03.638538 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 23:58:03.638548 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 23:58:03.638563 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 12 23:58:03.638573 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 12 23:58:03.638583 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 23:58:03.638592 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 23:58:03.647397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:58:03.668316 kernel: scsi host0: storvsc_host_t Sep 12 23:58:03.668480 kernel: scsi host1: storvsc_host_t Sep 12 23:58:03.668577 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 23:58:03.647536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:03.686879 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 23:58:03.694567 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 12 23:58:03.696333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:03.714120 kernel: PTP clock support registered Sep 12 23:58:03.725181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 23:58:03.746963 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 23:58:03.746984 kernel: hv_vmbus: registering driver hv_utils Sep 12 23:58:03.746993 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 23:58:03.747003 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 23:58:03.755503 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: VF slot 1 added Sep 12 23:58:03.759085 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 23:58:03.759247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:03.489319 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 23:58:03.502298 systemd-journald[216]: Time jumped backwards, rotating. Sep 12 23:58:03.502345 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 23:58:03.474732 systemd-resolved[252]: Clock change detected. Flushing caches. Sep 12 23:58:03.516747 kernel: hv_vmbus: registering driver hv_pci Sep 12 23:58:03.516764 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 23:58:03.521082 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 23:58:03.527805 kernel: hv_pci e78c236d-8ca2-4144-b3c4-23d045e014b0: PCI VMBus probing: Using version 0x10004 Sep 12 23:58:03.527982 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 23:58:03.538349 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 23:58:03.531636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 23:58:03.693666 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 23:58:03.693899 kernel: hv_pci e78c236d-8ca2-4144-b3c4-23d045e014b0: PCI host bridge to bus 8ca2:00 Sep 12 23:58:03.694001 kernel: pci_bus 8ca2:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 12 23:58:03.705744 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 23:58:03.705929 kernel: pci_bus 8ca2:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 23:58:03.713661 kernel: pci 8ca2:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 12 23:58:03.723367 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:03.723388 kernel: pci 8ca2:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 23:58:03.723408 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 23:58:03.731240 kernel: pci 8ca2:00:02.0: enabling Extended Tags Sep 12 23:58:03.753109 kernel: pci 8ca2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8ca2:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 12 23:58:03.765332 kernel: pci_bus 8ca2:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 23:58:03.765522 kernel: pci 8ca2:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 23:58:03.813661 kernel: mlx5_core 8ca2:00:02.0: enabling device (0000 -> 0002) Sep 12 23:58:03.820106 kernel: mlx5_core 8ca2:00:02.0: firmware version: 16.31.2424 Sep 12 23:58:04.101360 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: VF registering: eth1 Sep 12 23:58:04.101675 kernel: mlx5_core 8ca2:00:02.0 eth1: joined to eth0 Sep 12 23:58:04.111184 kernel: mlx5_core 8ca2:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 12 23:58:04.122140 kernel: mlx5_core 8ca2:00:02.0 enP36002s1: renamed from eth1 Sep 12 23:58:04.360929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Sep 12 23:58:04.386117 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (494) Sep 12 23:58:04.399729 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 23:58:04.428077 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (495) Sep 12 23:58:04.421176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 12 23:58:04.440317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 23:58:04.463388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 23:58:04.488246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:58:04.516129 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:04.525114 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:04.537112 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:05.539146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:05.539957 disk-uuid[607]: The operation has completed successfully. Sep 12 23:58:05.610014 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:58:05.612117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:58:05.643281 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 23:58:05.657025 sh[720]: Success Sep 12 23:58:05.688125 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 23:58:06.056635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:58:06.077227 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 23:58:06.089120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 23:58:06.125514 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 12 23:58:06.125561 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:06.132030 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 23:58:06.137210 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:58:06.141301 kernel: BTRFS info (device dm-0): using free space tree Sep 12 23:58:06.546868 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 23:58:06.552271 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:58:06.573396 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:58:06.581220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:58:06.625142 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:06.625185 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:06.629510 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:58:06.669140 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:58:06.692220 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:58:06.712030 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:58:06.717970 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Sep 12 23:58:06.731439 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:06.729899 systemd-networkd[894]: lo: Link UP Sep 12 23:58:06.729902 systemd-networkd[894]: lo: Gained carrier Sep 12 23:58:06.735515 systemd-networkd[894]: Enumeration completed Sep 12 23:58:06.735689 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:58:06.742415 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:06.742418 systemd-networkd[894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:58:06.743536 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:58:06.753110 systemd[1]: Reached target network.target - Network. Sep 12 23:58:06.786338 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 23:58:06.859342 kernel: mlx5_core 8ca2:00:02.0 enP36002s1: Link up Sep 12 23:58:06.859592 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 23:58:06.935112 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: Data path switched to VF: enP36002s1 Sep 12 23:58:06.935410 systemd-networkd[894]: enP36002s1: Link UP Sep 12 23:58:06.935656 systemd-networkd[894]: eth0: Link UP Sep 12 23:58:06.936040 systemd-networkd[894]: eth0: Gained carrier Sep 12 23:58:06.936049 systemd-networkd[894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 23:58:06.959609 systemd-networkd[894]: enP36002s1: Gained carrier Sep 12 23:58:06.970139 systemd-networkd[894]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 23:58:07.830773 ignition[905]: Ignition 2.19.0 Sep 12 23:58:07.830788 ignition[905]: Stage: fetch-offline Sep 12 23:58:07.830826 ignition[905]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:07.835345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:58:07.830835 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:07.857223 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 23:58:07.830955 ignition[905]: parsed url from cmdline: "" Sep 12 23:58:07.830959 ignition[905]: no config URL provided Sep 12 23:58:07.830964 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:58:07.830971 ignition[905]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:58:07.830976 ignition[905]: failed to fetch config: resource requires networking Sep 12 23:58:07.834401 ignition[905]: Ignition finished successfully Sep 12 23:58:07.870874 ignition[914]: Ignition 2.19.0 Sep 12 23:58:07.870880 ignition[914]: Stage: fetch Sep 12 23:58:07.871849 ignition[914]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:07.871870 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:07.871989 ignition[914]: parsed url from cmdline: "" Sep 12 23:58:07.871993 ignition[914]: no config URL provided Sep 12 23:58:07.872003 ignition[914]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:58:07.872011 ignition[914]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:58:07.872032 ignition[914]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 23:58:07.968244 ignition[914]: GET result: OK
Sep 12 23:58:07.968312 ignition[914]: config has been read from IMDS userdata Sep 12 23:58:07.968357 ignition[914]: parsing config with SHA512: 91a4cc969166eccf8b2d3a479b4b1ef49237eec203a31ee94ee57d559f58d335a51d91d7740e793069670c0dd387e2d399571e2b3cbd7a623a5ef9208140ce47 Sep 12 23:58:07.973116 unknown[914]: fetched base config from "system" Sep 12 23:58:07.973581 ignition[914]: fetch: fetch complete Sep 12 23:58:07.973125 unknown[914]: fetched base config from "system" Sep 12 23:58:07.973585 ignition[914]: fetch: fetch passed Sep 12 23:58:07.973130 unknown[914]: fetched user config from "azure" Sep 12 23:58:07.973630 ignition[914]: Ignition finished successfully Sep 12 23:58:07.977963 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 23:58:07.999312 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 23:58:08.015042 ignition[920]: Ignition 2.19.0 Sep 12 23:58:08.017434 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:58:08.015048 ignition[920]: Stage: kargs Sep 12 23:58:08.015238 ignition[920]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:08.041362 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 23:58:08.015248 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:08.066115 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 23:58:08.016288 ignition[920]: kargs: kargs passed Sep 12 23:58:08.072255 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:58:08.016351 ignition[920]: Ignition finished successfully Sep 12 23:58:08.084334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:58:08.058804 ignition[927]: Ignition 2.19.0 Sep 12 23:58:08.096020 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:58:08.058810 ignition[927]: Stage: disks Sep 12 23:58:08.104641 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:58:08.058967 ignition[927]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:08.116085 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:58:08.058976 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:08.140325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:58:08.059925 ignition[927]: disks: disks passed Sep 12 23:58:08.059970 ignition[927]: Ignition finished successfully Sep 12 23:58:08.206333 systemd-fsck[935]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 12 23:58:08.213342 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:58:08.229297 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:58:08.249501 systemd-networkd[894]: eth0: Gained IPv6LL Sep 12 23:58:08.291106 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 12 23:58:08.291960 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:58:08.296891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:58:08.388164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:58:08.413106 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946) Sep 12 23:58:08.413146 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:08.423847 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:08.424144 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:58:08.444275 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:58:08.429304 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... 
Sep 12 23:58:08.451761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:58:08.463738 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:58:08.471732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 23:58:08.493108 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:58:08.494376 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 23:58:08.501462 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 23:58:09.007124 coreos-metadata[948]: Sep 12 23:58:09.007 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 23:58:09.017569 coreos-metadata[948]: Sep 12 23:58:09.017 INFO Fetch successful Sep 12 23:58:09.023043 coreos-metadata[948]: Sep 12 23:58:09.019 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 23:58:09.034721 coreos-metadata[948]: Sep 12 23:58:09.034 INFO Fetch successful Sep 12 23:58:09.040472 coreos-metadata[948]: Sep 12 23:58:09.036 INFO wrote hostname ci-4081.3.5-n-d2844e2d10 to /sysroot/etc/hostname Sep 12 23:58:09.037362 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 23:58:10.281268 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:58:10.338572 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:58:10.360781 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:58:10.369564 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:58:11.572356 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:58:11.589290 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Sep 12 23:58:11.598269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 23:58:11.619392 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:11.616066 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:58:11.643623 ignition[1063]: INFO : Ignition 2.19.0 Sep 12 23:58:11.643623 ignition[1063]: INFO : Stage: mount Sep 12 23:58:11.643623 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:11.643623 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:11.670938 ignition[1063]: INFO : mount: mount passed Sep 12 23:58:11.670938 ignition[1063]: INFO : Ignition finished successfully Sep 12 23:58:11.652016 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:58:11.675332 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:58:11.687892 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:58:11.716381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:58:11.742113 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1076) Sep 12 23:58:11.754722 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:11.754750 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:11.758710 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:58:11.767104 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:58:11.768996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 23:58:11.800127 ignition[1093]: INFO : Ignition 2.19.0 Sep 12 23:58:11.800127 ignition[1093]: INFO : Stage: files Sep 12 23:58:11.800127 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:11.800127 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:11.820512 ignition[1093]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:58:11.834670 ignition[1093]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:58:11.842167 ignition[1093]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:58:11.935608 ignition[1093]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:58:11.943029 ignition[1093]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:58:11.943029 ignition[1093]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:58:11.936119 unknown[1093]: wrote ssh authorized keys file for user: core Sep 12 23:58:11.995205 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 23:58:12.004736 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 12 23:58:12.004736 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 12 23:58:12.004736 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 12 23:58:12.158573 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 23:58:12.404934 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" 
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:58:12.416382 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 12 23:58:12.906665 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 23:58:13.108120 ignition[1093]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 12 23:58:13.108120 ignition[1093]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:58:13.171224 ignition[1093]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:58:13.171224 ignition[1093]: INFO : files: files passed Sep 12 23:58:13.171224 ignition[1093]: INFO : Ignition finished successfully Sep 12 23:58:13.173086 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 23:58:13.210369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:58:13.225262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:58:13.354823 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:58:13.354823 initrd-setup-root-after-ignition[1121]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:58:13.251916 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:58:13.383836 initrd-setup-root-after-ignition[1125]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:58:13.252003 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 23:58:13.284886 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:58:13.292401 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:58:13.328387 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:58:13.365313 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:58:13.365412 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:58:13.379190 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:58:13.389574 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:58:13.403944 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:58:13.406274 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:58:13.439367 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:58:13.462240 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:58:13.483413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:58:13.493000 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:58:13.505904 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:58:13.517259 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:58:13.517344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:58:13.536741 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:58:13.548510 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:58:13.558579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:58:13.576827 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:58:13.589978 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:58:13.601431 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:58:13.612941 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:58:13.625478 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:58:13.639530 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:58:13.651686 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:58:13.703165 kernel: mlx5_core 8ca2:00:02.0: poll_health:835:(pid 21): device's health compromised - reached miss count
Sep 12 23:58:13.661429 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:58:13.661503 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:58:13.679051 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:58:13.699075 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:58:13.727020 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:58:13.727286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:58:13.735164 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:58:13.735237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:58:13.752951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:58:13.753012 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:58:13.767164 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:58:13.767209 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:58:13.778591 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 23:58:13.778634 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 23:58:13.843639 ignition[1147]: INFO : Ignition 2.19.0
Sep 12 23:58:13.843639 ignition[1147]: INFO : Stage: umount
Sep 12 23:58:13.843639 ignition[1147]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:13.843639 ignition[1147]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:13.843639 ignition[1147]: INFO : umount: umount passed
Sep 12 23:58:13.843639 ignition[1147]: INFO : Ignition finished successfully
Sep 12 23:58:13.811278 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:58:13.837236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:58:13.848030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:58:13.848107 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:58:13.864078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:58:13.864166 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:58:13.878175 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:58:13.878277 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:58:13.890453 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:58:13.890548 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:58:13.906742 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:58:13.906835 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:58:13.916604 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:58:13.916651 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:58:13.927665 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 23:58:13.927707 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 23:58:13.939729 systemd[1]: Stopped target network.target - Network.
Sep 12 23:58:13.950164 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:58:13.950229 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:58:13.962775 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:58:13.973205 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:58:13.979344 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:58:13.986653 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:58:13.996695 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:58:14.006815 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:58:14.006878 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:58:14.016992 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:58:14.017043 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:58:14.027530 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:58:14.027589 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:58:14.037956 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:58:14.038002 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:58:14.049789 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:58:14.060897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:58:14.071207 systemd-networkd[894]: eth0: DHCPv6 lease lost
Sep 12 23:58:14.072734 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:58:14.072856 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:58:14.092573 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:58:14.092678 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:58:14.107143 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:58:14.107198 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:58:14.137346 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:58:14.146314 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:58:14.146385 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:58:14.157852 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:58:14.347677 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: Data path switched from VF: enP36002s1
Sep 12 23:58:14.157895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:58:14.169019 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:58:14.169062 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:58:14.179633 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:58:14.179673 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:58:14.192275 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:58:14.205340 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:58:14.237826 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:58:14.238002 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:58:14.252140 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:58:14.252254 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:58:14.264053 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:58:14.264148 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:58:14.274996 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:58:14.275036 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:58:14.286599 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:58:14.286656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:58:14.303994 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:58:14.304046 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:58:14.328684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:58:14.328737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:58:14.347729 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:58:14.347783 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:58:14.372295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:58:14.387750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:58:14.387826 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:58:14.403214 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 23:58:14.403291 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:58:14.420031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:58:14.420129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:58:14.432138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:58:14.432183 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:14.444029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:58:14.446180 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:58:14.463012 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:58:14.463143 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:58:14.472657 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:58:14.503540 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:58:14.721625 systemd[1]: Switching root.
Sep 12 23:58:14.752740 systemd-journald[216]: Journal stopped
Sep 12 23:58:23.471228 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:58:23.471256 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:58:23.471266 kernel: SELinux: policy capability open_perms=1
Sep 12 23:58:23.471278 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:58:23.471285 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:58:23.471293 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:58:23.471302 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:58:23.471313 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:58:23.471321 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:58:23.471329 kernel: audit: type=1403 audit(1757721497.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:58:23.471339 systemd[1]: Successfully loaded SELinux policy in 228.273ms.
Sep 12 23:58:23.471349 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10ms.
Sep 12 23:58:23.471359 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:58:23.471368 systemd[1]: Detected virtualization microsoft.
Sep 12 23:58:23.471377 systemd[1]: Detected architecture arm64.
Sep 12 23:58:23.471387 systemd[1]: Detected first boot.
Sep 12 23:58:23.471396 systemd[1]: Hostname set to .
Sep 12 23:58:23.471405 systemd[1]: Initializing machine ID from random generator.
Sep 12 23:58:23.471414 zram_generator::config[1206]: No configuration found.
Sep 12 23:58:23.471424 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:58:23.471433 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:58:23.471443 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 12 23:58:23.471453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:58:23.471462 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:58:23.471471 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:58:23.471480 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:58:23.471490 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:58:23.471499 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:58:23.471511 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:58:23.471520 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:58:23.471529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:58:23.471538 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:58:23.471547 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:58:23.471557 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:58:23.471566 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:58:23.471575 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:58:23.471584 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 23:58:23.471595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:58:23.471604 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:58:23.471613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:58:23.471625 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:58:23.471634 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:58:23.471644 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:58:23.471653 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:58:23.471664 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:58:23.471673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:58:23.471683 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:58:23.471693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:58:23.471702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:58:23.471712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:58:23.471722 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:58:23.471733 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:58:23.471743 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:58:23.471752 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:58:23.471762 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:58:23.471771 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:58:23.471781 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:58:23.471792 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:58:23.471801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:58:23.471811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:58:23.471821 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:58:23.471830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:58:23.471840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:58:23.471849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:58:23.471859 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:58:23.471869 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:58:23.471880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:58:23.471890 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 23:58:23.471900 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 12 23:58:23.471909 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:58:23.471920 kernel: fuse: init (API version 7.39)
Sep 12 23:58:23.471928 kernel: loop: module loaded
Sep 12 23:58:23.471937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:58:23.471947 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:58:23.471958 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:58:23.471983 systemd-journald[1302]: Collecting audit messages is disabled.
Sep 12 23:58:23.472003 systemd-journald[1302]: Journal started
Sep 12 23:58:23.472025 systemd-journald[1302]: Runtime Journal (/run/log/journal/fa301cbef795457b9e7b865a54990cad) is 8.0M, max 78.5M, 70.5M free.
Sep 12 23:58:23.493838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:58:23.510732 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:58:23.510802 kernel: ACPI: bus type drm_connector registered
Sep 12 23:58:23.511399 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:58:23.517645 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:58:23.523876 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:58:23.529792 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:58:23.536479 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:58:23.542814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:58:23.548579 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:58:23.555312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:58:23.562550 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:58:23.562703 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:58:23.569848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:58:23.569996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:58:23.576601 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:58:23.576746 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:58:23.582957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:58:23.584214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:58:23.591469 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:58:23.591616 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:58:23.598223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:58:23.598402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:58:23.604945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:58:23.611699 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:58:23.619620 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:58:23.627337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:58:23.643512 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:58:23.656197 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:58:23.663329 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:58:23.669484 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:58:23.706269 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:58:23.713194 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:58:23.719577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:58:23.720610 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:58:23.727008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:58:23.728030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:58:23.736278 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:58:23.747688 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 23:58:23.758592 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:58:23.765362 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:58:23.777856 udevadm[1367]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 23:58:23.784654 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:58:23.791903 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:58:23.794243 systemd-journald[1302]: Time spent on flushing to /var/log/journal/fa301cbef795457b9e7b865a54990cad is 45.107ms for 893 entries.
Sep 12 23:58:23.794243 systemd-journald[1302]: System Journal (/var/log/journal/fa301cbef795457b9e7b865a54990cad) is 11.8M, max 2.6G, 2.6G free.
Sep 12 23:58:23.891761 systemd-journald[1302]: Received client request to flush runtime journal.
Sep 12 23:58:23.891809 systemd-journald[1302]: /var/log/journal/fa301cbef795457b9e7b865a54990cad/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Sep 12 23:58:23.891831 systemd-journald[1302]: Rotating system journal.
Sep 12 23:58:23.893407 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:58:23.970764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:58:24.027384 systemd-tmpfiles[1365]: ACLs are not supported, ignoring.
Sep 12 23:58:24.027403 systemd-tmpfiles[1365]: ACLs are not supported, ignoring.
Sep 12 23:58:24.033634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:58:24.047205 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:58:24.682084 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:58:24.701259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:58:24.716601 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Sep 12 23:58:24.716620 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Sep 12 23:58:24.720472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:58:25.646637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 23:58:25.656343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:58:25.682402 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Sep 12 23:58:26.424223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:58:26.444452 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:58:26.482384 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Sep 12 23:58:26.511045 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 23:58:26.574525 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 23:58:26.613120 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 23:58:26.626142 kernel: hv_vmbus: registering driver hv_balloon
Sep 12 23:58:26.636116 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Sep 12 23:58:26.636195 kernel: hv_balloon: Memory hot add disabled on ARM64
Sep 12 23:58:26.639178 kernel: hv_vmbus: registering driver hyperv_fb
Sep 12 23:58:26.663509 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Sep 12 23:58:26.663582 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Sep 12 23:58:26.668988 kernel: Console: switching to colour dummy device 80x25
Sep 12 23:58:26.676694 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 23:58:26.679385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:26.696744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:58:26.697796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:26.711264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:26.718290 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:58:26.718482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:26.733373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:26.853184 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1402)
Sep 12 23:58:26.897859 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 23:58:26.921796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 23:58:26.935237 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 23:58:26.977668 systemd-networkd[1407]: lo: Link UP
Sep 12 23:58:26.977683 systemd-networkd[1407]: lo: Gained carrier
Sep 12 23:58:26.979606 systemd-networkd[1407]: Enumeration completed
Sep 12 23:58:26.979768 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:58:26.979980 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:58:26.979984 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:58:26.992237 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 23:58:27.015114 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 23:58:27.045125 kernel: mlx5_core 8ca2:00:02.0 enP36002s1: Link up
Sep 12 23:58:27.051109 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 23:58:27.066539 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 23:58:27.074897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:58:27.087233 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 23:58:27.093966 lvm[1488]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 23:58:27.102113 kernel: hv_netvsc 0022487d-7fb7-0022-487d-7fb70022487d eth0: Data path switched to VF: enP36002s1 Sep 12 23:58:27.103604 systemd-networkd[1407]: enP36002s1: Link UP Sep 12 23:58:27.103701 systemd-networkd[1407]: eth0: Link UP Sep 12 23:58:27.103704 systemd-networkd[1407]: eth0: Gained carrier Sep 12 23:58:27.103718 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:27.108397 systemd-networkd[1407]: enP36002s1: Gained carrier Sep 12 23:58:27.109659 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:58:27.117892 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:58:27.125397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:58:27.125429 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:58:27.126164 systemd-networkd[1407]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 23:58:27.131700 systemd[1]: Reached target machines.target - Containers. Sep 12 23:58:27.137885 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:58:27.150220 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:58:27.157882 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:58:27.163935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:58:27.164944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 12 23:58:27.175266 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:58:27.183796 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:58:27.191665 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:58:27.247186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:58:27.249004 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:58:27.270222 kernel: loop0: detected capacity change from 0 to 114328 Sep 12 23:58:27.274862 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:58:27.847119 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:58:27.910118 kernel: loop1: detected capacity change from 0 to 114432 Sep 12 23:58:28.390933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:28.493123 kernel: loop2: detected capacity change from 0 to 31320 Sep 12 23:58:28.984261 systemd-networkd[1407]: eth0: Gained IPv6LL Sep 12 23:58:28.986815 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:58:29.119112 kernel: loop3: detected capacity change from 0 to 203944 Sep 12 23:58:29.158114 kernel: loop4: detected capacity change from 0 to 114328 Sep 12 23:58:29.171110 kernel: loop5: detected capacity change from 0 to 114432 Sep 12 23:58:29.184107 kernel: loop6: detected capacity change from 0 to 31320 Sep 12 23:58:29.195108 kernel: loop7: detected capacity change from 0 to 203944 Sep 12 23:58:29.207117 (sd-merge)[1515]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 23:58:29.207539 (sd-merge)[1515]: Merged extensions into '/usr'. 
Sep 12 23:58:29.211628 systemd[1]: Reloading requested from client PID 1496 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:58:29.211736 systemd[1]: Reloading... Sep 12 23:58:29.269525 zram_generator::config[1542]: No configuration found. Sep 12 23:58:29.405521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:29.474208 systemd[1]: Reloading finished in 261 ms. Sep 12 23:58:29.486675 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:58:29.499220 systemd[1]: Starting ensure-sysext.service... Sep 12 23:58:29.505276 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:58:29.522131 systemd[1]: Reloading requested from client PID 1603 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:58:29.522148 systemd[1]: Reloading... Sep 12 23:58:29.530407 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:58:29.531498 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:58:29.532816 systemd-tmpfiles[1604]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:58:29.533220 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Sep 12 23:58:29.533341 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Sep 12 23:58:29.582180 zram_generator::config[1633]: No configuration found. Sep 12 23:58:29.594565 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:58:29.594573 systemd-tmpfiles[1604]: Skipping /boot Sep 12 23:58:29.602698 systemd-tmpfiles[1604]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 12 23:58:29.602714 systemd-tmpfiles[1604]: Skipping /boot Sep 12 23:58:29.704047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:29.777029 systemd[1]: Reloading finished in 254 ms. Sep 12 23:58:29.792263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:58:29.818311 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:58:29.840634 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:58:29.850938 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:58:29.863246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:58:29.874953 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:58:29.887760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:58:29.890051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:58:29.911292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:58:29.927370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:58:29.936450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:58:29.945515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:58:29.945974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:58:29.953421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:58:29.953571 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 23:58:29.961248 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:58:29.961440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:58:29.977938 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:58:29.988744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:58:29.998330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:58:30.000017 systemd-resolved[1702]: Positive Trust Anchors: Sep 12 23:58:30.000033 systemd-resolved[1702]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:58:30.000065 systemd-resolved[1702]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:58:30.006853 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:58:30.021364 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:58:30.028870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:58:30.034486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:58:30.034652 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:58:30.041994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 23:58:30.042174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:58:30.048608 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:58:30.048752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:58:30.056230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:58:30.056379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:58:30.063912 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:58:30.064110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:58:30.073505 systemd[1]: Finished ensure-sysext.service. Sep 12 23:58:30.080588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:58:30.080651 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:58:30.158630 systemd-resolved[1702]: Using system hostname 'ci-4081.3.5-n-d2844e2d10'. Sep 12 23:58:30.160687 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:58:30.167299 systemd[1]: Reached target network.target - Network. Sep 12 23:58:30.172139 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:58:30.178372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:30.211863 augenrules[1745]: No rules Sep 12 23:58:30.214780 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:58:30.255484 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:58:33.179700 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 12 23:58:33.188120 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:58:38.784139 ldconfig[1492]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:58:38.802173 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:58:38.814223 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:58:38.843254 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:58:38.849696 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:58:38.855590 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:58:38.862447 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:58:38.869662 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:58:38.876165 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:58:38.883255 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:58:38.890241 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:58:38.890273 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:58:38.895193 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:58:38.915286 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:58:38.922772 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:58:38.944201 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 12 23:58:38.950440 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:58:38.956297 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:58:38.961669 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:58:38.966884 systemd[1]: System is tainted: cgroupsv1 Sep 12 23:58:38.966930 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:58:38.966949 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:58:38.989176 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 23:58:38.997221 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:58:39.011223 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:58:39.020252 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:58:39.027164 (chronyd)[1763]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 12 23:58:39.030218 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:58:39.037349 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:58:39.043246 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:58:39.043288 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 12 23:58:39.045583 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 23:58:39.052268 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Sep 12 23:58:39.053020 chronyd[1774]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 12 23:58:39.056223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:39.057727 KVP[1772]: KVP starting; pid is:1772 Sep 12 23:58:39.059179 jq[1770]: false Sep 12 23:58:39.076304 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:58:39.082800 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:58:39.089569 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:58:39.099253 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:58:39.112926 kernel: hv_utils: KVP IC version 4.0 Sep 12 23:58:39.111465 KVP[1772]: KVP LIC Version: 3.1 Sep 12 23:58:39.117400 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:58:39.119346 extend-filesystems[1771]: Found loop4 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found loop5 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found loop6 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found loop7 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda1 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda2 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda3 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found usr Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda4 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda6 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda7 Sep 12 23:58:39.129712 extend-filesystems[1771]: Found sda9 Sep 12 23:58:39.129712 extend-filesystems[1771]: Checking size of /dev/sda9 Sep 12 23:58:39.232160 extend-filesystems[1771]: Old size kept for /dev/sda9 Sep 12 23:58:39.232160 extend-filesystems[1771]: Found sr0
Sep 12 23:58:39.137390 chronyd[1774]: Timezone right/UTC failed leap second check, ignoring Sep 12 23:58:39.132271 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:58:39.137599 chronyd[1774]: Loaded seccomp filter (level 2) Sep 12 23:58:39.149509 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:58:39.160287 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:58:39.191352 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:58:39.254506 jq[1796]: true Sep 12 23:58:39.210250 systemd[1]: Started chronyd.service - NTP client/server. Sep 12 23:58:39.236980 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:58:39.237228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:58:39.237461 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:58:39.237648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:58:39.253554 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:58:39.253770 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:58:39.262950 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:58:39.275503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:58:39.275715 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:58:39.288952 systemd-logind[1788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 12 23:58:39.290322 systemd-logind[1788]: New seat seat0. Sep 12 23:58:39.298288 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 23:58:39.332812 update_engine[1793]: I20250912 23:58:39.332705 1793 main.cc:92] Flatcar Update Engine starting Sep 12 23:58:39.341139 (ntainerd)[1830]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:58:39.342359 jq[1829]: true Sep 12 23:58:39.375498 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1826) Sep 12 23:58:39.426291 tar[1817]: linux-arm64/helm Sep 12 23:58:39.443079 dbus-daemon[1767]: [system] SELinux support is enabled Sep 12 23:58:39.443320 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:58:39.456024 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:58:39.456537 dbus-daemon[1767]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 23:58:39.456053 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:58:39.463368 update_engine[1793]: I20250912 23:58:39.463315 1793 update_check_scheduler.cc:74] Next update check in 2m42s Sep 12 23:58:39.463945 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:58:39.463968 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:58:39.471520 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:58:39.479694 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:58:39.486377 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 12 23:58:39.504975 bash[1865]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:58:39.510853 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:58:39.528519 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:58:39.619670 coreos-metadata[1766]: Sep 12 23:58:39.619 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 23:58:39.625219 coreos-metadata[1766]: Sep 12 23:58:39.624 INFO Fetch successful Sep 12 23:58:39.625219 coreos-metadata[1766]: Sep 12 23:58:39.624 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 23:58:39.629437 coreos-metadata[1766]: Sep 12 23:58:39.629 INFO Fetch successful Sep 12 23:58:39.629751 coreos-metadata[1766]: Sep 12 23:58:39.629 INFO Fetching http://168.63.129.16/machine/518e1de9-eade-4072-a335-9d7a19c6ea6f/387e846d%2Dd2ae%2D4fea%2D9284%2Da7f2fa8210c9.%5Fci%2D4081.3.5%2Dn%2Dd2844e2d10?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 23:58:39.632069 coreos-metadata[1766]: Sep 12 23:58:39.631 INFO Fetch successful Sep 12 23:58:39.632349 coreos-metadata[1766]: Sep 12 23:58:39.632 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 23:58:39.648108 coreos-metadata[1766]: Sep 12 23:58:39.646 INFO Fetch successful Sep 12 23:58:39.692547 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:58:39.706953 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 23:58:39.783276 locksmithd[1887]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:58:40.080161 tar[1817]: linux-arm64/LICENSE Sep 12 23:58:40.080369 tar[1817]: linux-arm64/README.md Sep 12 23:58:40.100509 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 12 23:58:40.185959 containerd[1830]: time="2025-09-12T23:58:40.185865920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:58:40.243035 containerd[1830]: time="2025-09-12T23:58:40.242221840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:58:40.245714 containerd[1830]: time="2025-09-12T23:58:40.245674600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:58:40.246343 containerd[1830]: time="2025-09-12T23:58:40.246323480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:58:40.246434 containerd[1830]: time="2025-09-12T23:58:40.246417840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:58:40.247181 containerd[1830]: time="2025-09-12T23:58:40.247158160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:58:40.247280 containerd[1830]: time="2025-09-12T23:58:40.247264840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:58:40.247408 containerd[1830]: time="2025-09-12T23:58:40.247389520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:58:40.247470 containerd[1830]: time="2025-09-12T23:58:40.247457080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 23:58:40.248132 containerd[1830]: time="2025-09-12T23:58:40.247732240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:58:40.248206 containerd[1830]: time="2025-09-12T23:58:40.248191920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:58:40.248273 containerd[1830]: time="2025-09-12T23:58:40.248258400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:58:40.248330 containerd[1830]: time="2025-09-12T23:58:40.248317040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:58:40.248470 containerd[1830]: time="2025-09-12T23:58:40.248452960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:58:40.249455 containerd[1830]: time="2025-09-12T23:58:40.249433880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:58:40.250540 containerd[1830]: time="2025-09-12T23:58:40.250504040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:58:40.250627 containerd[1830]: time="2025-09-12T23:58:40.250613640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 23:58:40.250791 containerd[1830]: time="2025-09-12T23:58:40.250776280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:58:40.250909 containerd[1830]: time="2025-09-12T23:58:40.250895160Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:58:40.269483 containerd[1830]: time="2025-09-12T23:58:40.269426000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:58:40.270258 containerd[1830]: time="2025-09-12T23:58:40.270237000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:58:40.270632 containerd[1830]: time="2025-09-12T23:58:40.270610480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:58:40.270746 containerd[1830]: time="2025-09-12T23:58:40.270731520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:58:40.270862 containerd[1830]: time="2025-09-12T23:58:40.270820440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:58:40.271129 containerd[1830]: time="2025-09-12T23:58:40.271107800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:58:40.273430 containerd[1830]: time="2025-09-12T23:58:40.273163600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:58:40.273430 containerd[1830]: time="2025-09-12T23:58:40.273325240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:58:40.273430 containerd[1830]: time="2025-09-12T23:58:40.273357640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 23:58:40.273430 containerd[1830]: time="2025-09-12T23:58:40.273374800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:58:40.273430 containerd[1830]: time="2025-09-12T23:58:40.273390040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.273430 containerd[1830]: time="2025-09-12T23:58:40.273403120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.273679 containerd[1830]: time="2025-09-12T23:58:40.273416120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.273679 containerd[1830]: time="2025-09-12T23:58:40.273625720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.273679 containerd[1830]: time="2025-09-12T23:58:40.273644480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.273679 containerd[1830]: time="2025-09-12T23:58:40.273658280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.274547 containerd[1830]: time="2025-09-12T23:58:40.274475960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.274547 containerd[1830]: time="2025-09-12T23:58:40.274502240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:58:40.274547 containerd[1830]: time="2025-09-12T23:58:40.274526360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274642600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274661800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274675480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274688280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274711880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274726480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274740080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274769720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274795880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274809200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274821440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274834640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274850760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274872400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.275771 containerd[1830]: time="2025-09-12T23:58:40.274884480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.274895760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.274952120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.274972520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.274984600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.274997200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.275008000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.275020360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.275031240Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:58:40.276074 containerd[1830]: time="2025-09-12T23:58:40.275041800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 23:58:40.277656 containerd[1830]: time="2025-09-12T23:58:40.277579440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[]
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:58:40.278751 containerd[1830]: time="2025-09-12T23:58:40.277831040Z" level=info msg="Connect containerd service" Sep 12 23:58:40.278830 containerd[1830]: time="2025-09-12T23:58:40.278726240Z" level=info msg="using legacy CRI server" Sep 12 23:58:40.278882 containerd[1830]: time="2025-09-12T23:58:40.278866120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:58:40.279787 containerd[1830]: time="2025-09-12T23:58:40.279764760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:58:40.281558 containerd[1830]: time="2025-09-12T23:58:40.281515800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Sep 12 23:58:40.281828 containerd[1830]: time="2025-09-12T23:58:40.281806800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:58:40.281872 containerd[1830]: time="2025-09-12T23:58:40.281851800Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:58:40.281911 containerd[1830]: time="2025-09-12T23:58:40.281884360Z" level=info msg="Start subscribing containerd event" Sep 12 23:58:40.281940 containerd[1830]: time="2025-09-12T23:58:40.281926440Z" level=info msg="Start recovering state" Sep 12 23:58:40.282006 containerd[1830]: time="2025-09-12T23:58:40.281989040Z" level=info msg="Start event monitor" Sep 12 23:58:40.282045 containerd[1830]: time="2025-09-12T23:58:40.282007200Z" level=info msg="Start snapshots syncer" Sep 12 23:58:40.282045 containerd[1830]: time="2025-09-12T23:58:40.282018160Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:58:40.282045 containerd[1830]: time="2025-09-12T23:58:40.282030040Z" level=info msg="Start streaming server" Sep 12 23:58:40.282216 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:58:40.290308 containerd[1830]: time="2025-09-12T23:58:40.288547080Z" level=info msg="containerd successfully booted in 0.097951s" Sep 12 23:58:40.344913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:58:40.351916 (kubelet)[1925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:58:40.749688 kubelet[1925]: E0912 23:58:40.749646 1925 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:58:40.754298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:58:40.754444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:58:40.790913 sshd_keygen[1805]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:58:40.809535 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:58:40.819291 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:58:40.826292 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 23:58:40.837452 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:58:40.837665 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:58:40.847360 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:58:40.864215 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 12 23:58:40.897489 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:58:40.910581 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:58:40.922392 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:58:40.929050 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:58:40.934767 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 12 23:58:40.940583 systemd[1]: Startup finished in 16.393s (kernel) + 23.753s (userspace) = 40.147s. Sep 12 23:58:41.566410 login[1960]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 12 23:58:41.567779 login[1961]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:41.575824 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:58:41.581548 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:58:41.584271 systemd-logind[1788]: New session 1 of user core. Sep 12 23:58:41.611370 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:58:41.619394 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:58:41.652887 (systemd)[1970]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:58:41.955069 systemd[1970]: Queued start job for default target default.target. Sep 12 23:58:41.955765 systemd[1970]: Created slice app.slice - User Application Slice. Sep 12 23:58:41.955878 systemd[1970]: Reached target paths.target - Paths. Sep 12 23:58:41.955946 systemd[1970]: Reached target timers.target - Timers. Sep 12 23:58:41.965193 systemd[1970]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:58:41.971448 systemd[1970]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:58:41.971604 systemd[1970]: Reached target sockets.target - Sockets. Sep 12 23:58:41.971688 systemd[1970]: Reached target basic.target - Basic System. Sep 12 23:58:41.971777 systemd[1970]: Reached target default.target - Main User Target. Sep 12 23:58:41.971916 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:58:41.972014 systemd[1970]: Startup finished in 313ms. Sep 12 23:58:41.973781 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 12 23:58:42.566761 login[1960]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:42.571906 systemd-logind[1788]: New session 2 of user core. Sep 12 23:58:42.578339 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:58:42.968120 waagent[1957]: 2025-09-12T23:58:42.968004Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 23:58:42.978400 waagent[1957]: 2025-09-12T23:58:42.974145Z INFO Daemon Daemon OS: flatcar 4081.3.5 Sep 12 23:58:42.978738 waagent[1957]: 2025-09-12T23:58:42.978688Z INFO Daemon Daemon Python: 3.11.9 Sep 12 23:58:42.984121 waagent[1957]: 2025-09-12T23:58:42.983206Z INFO Daemon Daemon Run daemon Sep 12 23:58:42.989498 waagent[1957]: 2025-09-12T23:58:42.988144Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Sep 12 23:58:42.996953 waagent[1957]: 2025-09-12T23:58:42.996897Z INFO Daemon Daemon Using waagent for provisioning Sep 12 23:58:43.002312 waagent[1957]: 2025-09-12T23:58:43.002265Z INFO Daemon Daemon Activate resource disk Sep 12 23:58:43.006841 waagent[1957]: 2025-09-12T23:58:43.006792Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 23:58:43.017886 waagent[1957]: 2025-09-12T23:58:43.017834Z INFO Daemon Daemon Found device: None Sep 12 23:58:43.022327 waagent[1957]: 2025-09-12T23:58:43.022275Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 23:58:43.030723 waagent[1957]: 2025-09-12T23:58:43.030663Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 23:58:43.043723 waagent[1957]: 2025-09-12T23:58:43.043655Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 23:58:43.049940 waagent[1957]: 2025-09-12T23:58:43.049889Z INFO Daemon Daemon Running default provisioning handler Sep 12 
23:58:43.061423 waagent[1957]: 2025-09-12T23:58:43.061348Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 23:58:43.075279 waagent[1957]: 2025-09-12T23:58:43.075216Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 23:58:43.085371 waagent[1957]: 2025-09-12T23:58:43.085307Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 23:58:43.090589 waagent[1957]: 2025-09-12T23:58:43.090537Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 23:58:43.204889 waagent[1957]: 2025-09-12T23:58:43.204796Z INFO Daemon Daemon Successfully mounted dvd Sep 12 23:58:43.218798 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 23:58:43.220658 waagent[1957]: 2025-09-12T23:58:43.220553Z INFO Daemon Daemon Detect protocol endpoint Sep 12 23:58:43.225636 waagent[1957]: 2025-09-12T23:58:43.225588Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 23:58:43.231331 waagent[1957]: 2025-09-12T23:58:43.231284Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 12 23:58:43.238050 waagent[1957]: 2025-09-12T23:58:43.238004Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 23:58:43.243244 waagent[1957]: 2025-09-12T23:58:43.243197Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 23:58:43.248312 waagent[1957]: 2025-09-12T23:58:43.248267Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 23:58:43.296247 waagent[1957]: 2025-09-12T23:58:43.296195Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 23:58:43.302895 waagent[1957]: 2025-09-12T23:58:43.302861Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 23:58:43.308102 waagent[1957]: 2025-09-12T23:58:43.308041Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 23:58:43.584376 waagent[1957]: 2025-09-12T23:58:43.584233Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 23:58:43.590910 waagent[1957]: 2025-09-12T23:58:43.590833Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 23:58:43.600234 waagent[1957]: 2025-09-12T23:58:43.600183Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 23:58:43.620050 waagent[1957]: 2025-09-12T23:58:43.620005Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 23:58:43.625715 waagent[1957]: 2025-09-12T23:58:43.625665Z INFO Daemon Sep 12 23:58:43.628629 waagent[1957]: 2025-09-12T23:58:43.628585Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 3df723bd-7829-49d0-8efa-a67faca84ea6 eTag: 7038248234345681517 source: Fabric] Sep 12 23:58:43.640090 waagent[1957]: 2025-09-12T23:58:43.640034Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Sep 12 23:58:43.646818 waagent[1957]: 2025-09-12T23:58:43.646767Z INFO Daemon Sep 12 23:58:43.649560 waagent[1957]: 2025-09-12T23:58:43.649513Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 23:58:43.660166 waagent[1957]: 2025-09-12T23:58:43.660125Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 23:58:43.735130 waagent[1957]: 2025-09-12T23:58:43.734751Z INFO Daemon Downloaded certificate {'thumbprint': 'C89E14FBA0C01F8719AA259325E9B19EEFE90F65', 'hasPrivateKey': True} Sep 12 23:58:43.744888 waagent[1957]: 2025-09-12T23:58:43.744838Z INFO Daemon Fetch goal state completed Sep 12 23:58:43.755691 waagent[1957]: 2025-09-12T23:58:43.755647Z INFO Daemon Daemon Starting provisioning Sep 12 23:58:43.760570 waagent[1957]: 2025-09-12T23:58:43.760524Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 23:58:43.765117 waagent[1957]: 2025-09-12T23:58:43.765069Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-d2844e2d10] Sep 12 23:58:43.813284 waagent[1957]: 2025-09-12T23:58:43.813220Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-d2844e2d10] Sep 12 23:58:43.819819 waagent[1957]: 2025-09-12T23:58:43.819770Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 23:58:43.826050 waagent[1957]: 2025-09-12T23:58:43.826005Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 23:58:43.893569 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:43.893575 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 23:58:43.893619 systemd-networkd[1407]: eth0: DHCP lease lost Sep 12 23:58:43.895285 waagent[1957]: 2025-09-12T23:58:43.895220Z INFO Daemon Daemon Create user account if not exists Sep 12 23:58:43.900885 waagent[1957]: 2025-09-12T23:58:43.900838Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 23:58:43.906530 waagent[1957]: 2025-09-12T23:58:43.906487Z INFO Daemon Daemon Configure sudoer Sep 12 23:58:43.907164 systemd-networkd[1407]: eth0: DHCPv6 lease lost Sep 12 23:58:43.911470 waagent[1957]: 2025-09-12T23:58:43.911383Z INFO Daemon Daemon Configure sshd Sep 12 23:58:43.915907 waagent[1957]: 2025-09-12T23:58:43.915858Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 23:58:43.928468 waagent[1957]: 2025-09-12T23:58:43.928423Z INFO Daemon Daemon Deploy ssh public key. Sep 12 23:58:43.942150 systemd-networkd[1407]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 23:58:45.052885 waagent[1957]: 2025-09-12T23:58:45.052829Z INFO Daemon Daemon Provisioning complete Sep 12 23:58:45.071479 waagent[1957]: 2025-09-12T23:58:45.071429Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 23:58:45.077971 waagent[1957]: 2025-09-12T23:58:45.077923Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Sep 12 23:58:45.087756 waagent[1957]: 2025-09-12T23:58:45.087711Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 23:58:45.213745 waagent[2023]: 2025-09-12T23:58:45.213673Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 23:58:45.214697 waagent[2023]: 2025-09-12T23:58:45.214221Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Sep 12 23:58:45.214697 waagent[2023]: 2025-09-12T23:58:45.214295Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 12 23:58:45.296403 waagent[2023]: 2025-09-12T23:58:45.296307Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 23:58:45.299249 waagent[2023]: 2025-09-12T23:58:45.299188Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 23:58:45.299331 waagent[2023]: 2025-09-12T23:58:45.299300Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 23:58:45.307869 waagent[2023]: 2025-09-12T23:58:45.307766Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 23:58:45.313476 waagent[2023]: 2025-09-12T23:58:45.313433Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 23:58:45.313911 waagent[2023]: 2025-09-12T23:58:45.313865Z INFO ExtHandler Sep 12 23:58:45.313989 waagent[2023]: 2025-09-12T23:58:45.313950Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 38f5593c-f42c-4ef4-8d81-52596a76ee16 eTag: 7038248234345681517 source: Fabric] Sep 12 23:58:45.314298 waagent[2023]: 2025-09-12T23:58:45.314256Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 12 23:58:45.314867 waagent[2023]: 2025-09-12T23:58:45.314819Z INFO ExtHandler Sep 12 23:58:45.314931 waagent[2023]: 2025-09-12T23:58:45.314902Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 23:58:45.318904 waagent[2023]: 2025-09-12T23:58:45.318871Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 23:58:45.388687 waagent[2023]: 2025-09-12T23:58:45.388607Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C89E14FBA0C01F8719AA259325E9B19EEFE90F65', 'hasPrivateKey': True} Sep 12 23:58:45.389185 waagent[2023]: 2025-09-12T23:58:45.389141Z INFO ExtHandler Fetch goal state completed Sep 12 23:58:45.404687 waagent[2023]: 2025-09-12T23:58:45.404635Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2023 Sep 12 23:58:45.404832 waagent[2023]: 2025-09-12T23:58:45.404793Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 23:58:45.406446 waagent[2023]: 2025-09-12T23:58:45.406396Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 23:58:45.406806 waagent[2023]: 2025-09-12T23:58:45.406768Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 23:58:45.460892 waagent[2023]: 2025-09-12T23:58:45.460845Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 23:58:45.461102 waagent[2023]: 2025-09-12T23:58:45.461051Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 23:58:45.467213 waagent[2023]: 2025-09-12T23:58:45.466706Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 23:58:45.473046 systemd[1]: Reloading requested from client PID 2036 ('systemctl') (unit waagent.service)... Sep 12 23:58:45.473061 systemd[1]: Reloading... 
Sep 12 23:58:45.547126 zram_generator::config[2079]: No configuration found. Sep 12 23:58:45.647935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:45.725286 systemd[1]: Reloading finished in 251 ms. Sep 12 23:58:45.745572 waagent[2023]: 2025-09-12T23:58:45.745204Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 23:58:45.751769 systemd[1]: Reloading requested from client PID 2131 ('systemctl') (unit waagent.service)... Sep 12 23:58:45.751893 systemd[1]: Reloading... Sep 12 23:58:45.839243 zram_generator::config[2166]: No configuration found. Sep 12 23:58:45.944203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:46.017799 systemd[1]: Reloading finished in 265 ms. Sep 12 23:58:46.041122 waagent[2023]: 2025-09-12T23:58:46.040371Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 23:58:46.041122 waagent[2023]: 2025-09-12T23:58:46.040531Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 23:58:46.503279 waagent[2023]: 2025-09-12T23:58:46.502067Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 12 23:58:46.503279 waagent[2023]: 2025-09-12T23:58:46.502695Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 23:58:46.503636 waagent[2023]: 2025-09-12T23:58:46.503493Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 23:58:46.503636 waagent[2023]: 2025-09-12T23:58:46.503579Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 23:58:46.503821 waagent[2023]: 2025-09-12T23:58:46.503773Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 12 23:58:46.503944 waagent[2023]: 2025-09-12T23:58:46.503890Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 23:58:46.504083 waagent[2023]: 2025-09-12T23:58:46.504033Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 23:58:46.504083 waagent[2023]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 23:58:46.504083 waagent[2023]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 23:58:46.504083 waagent[2023]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 23:58:46.504083 waagent[2023]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 23:58:46.504083 waagent[2023]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 23:58:46.504083 waagent[2023]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 23:58:46.504562 waagent[2023]: 2025-09-12T23:58:46.504469Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 23:58:46.505250 waagent[2023]: 2025-09-12T23:58:46.505137Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 23:58:46.505416 waagent[2023]: 2025-09-12T23:58:46.505360Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 23:58:46.505454 waagent[2023]: 2025-09-12T23:58:46.505421Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Sep 12 23:58:46.506120 waagent[2023]: 2025-09-12T23:58:46.505808Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 23:58:46.506120 waagent[2023]: 2025-09-12T23:58:46.505976Z INFO EnvHandler ExtHandler Configure routes Sep 12 23:58:46.506120 waagent[2023]: 2025-09-12T23:58:46.506036Z INFO EnvHandler ExtHandler Gateway:None Sep 12 23:58:46.506120 waagent[2023]: 2025-09-12T23:58:46.506077Z INFO EnvHandler ExtHandler Routes:None Sep 12 23:58:46.506680 waagent[2023]: 2025-09-12T23:58:46.506481Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 23:58:46.506680 waagent[2023]: 2025-09-12T23:58:46.506542Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 12 23:58:46.506756 waagent[2023]: 2025-09-12T23:58:46.506716Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 23:58:46.515459 waagent[2023]: 2025-09-12T23:58:46.515415Z INFO ExtHandler ExtHandler Sep 12 23:58:46.515647 waagent[2023]: 2025-09-12T23:58:46.515612Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2d352ae2-83e7-4eea-969c-172f71fc7362 correlation cb1c57b2-0aa0-4aea-846b-18273f689819 created: 2025-09-12T23:57:07.354843Z] Sep 12 23:58:46.516058 waagent[2023]: 2025-09-12T23:58:46.516020Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 12 23:58:46.516687 waagent[2023]: 2025-09-12T23:58:46.516651Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 23:58:46.549965 waagent[2023]: 2025-09-12T23:58:46.549913Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4F966425-7872-45D5-AB95-57DD97C66D69;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 12 23:58:46.603167 waagent[2023]: 2025-09-12T23:58:46.603063Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 23:58:46.603167 waagent[2023]: Executing ['ip', '-a', '-o', 'link']: Sep 12 23:58:46.603167 waagent[2023]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 23:58:46.603167 waagent[2023]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:7f:b7 brd ff:ff:ff:ff:ff:ff Sep 12 23:58:46.603167 waagent[2023]: 3: enP36002s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7d:7f:b7 brd ff:ff:ff:ff:ff:ff\ altname enP36002p0s2 Sep 12 23:58:46.603167 waagent[2023]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 23:58:46.603167 waagent[2023]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 23:58:46.603167 waagent[2023]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 23:58:46.603167 waagent[2023]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 23:58:46.603167 waagent[2023]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 23:58:46.603167 waagent[2023]: 2: eth0 inet6 fe80::222:48ff:fe7d:7fb7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 23:58:46.670182 waagent[2023]: 2025-09-12T23:58:46.669233Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Sep 12 23:58:46.670182 waagent[2023]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:58:46.670182 waagent[2023]: pkts bytes target prot opt in out source destination Sep 12 23:58:46.670182 waagent[2023]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:58:46.670182 waagent[2023]: pkts bytes target prot opt in out source destination Sep 12 23:58:46.670182 waagent[2023]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:58:46.670182 waagent[2023]: pkts bytes target prot opt in out source destination Sep 12 23:58:46.670182 waagent[2023]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 23:58:46.670182 waagent[2023]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 23:58:46.670182 waagent[2023]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 23:58:46.672666 waagent[2023]: 2025-09-12T23:58:46.672606Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 23:58:46.672666 waagent[2023]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:58:46.672666 waagent[2023]: pkts bytes target prot opt in out source destination Sep 12 23:58:46.672666 waagent[2023]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:58:46.672666 waagent[2023]: pkts bytes target prot opt in out source destination Sep 12 23:58:46.672666 waagent[2023]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:58:46.672666 waagent[2023]: pkts bytes target prot opt in out source destination Sep 12 23:58:46.672666 waagent[2023]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 23:58:46.672666 waagent[2023]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 23:58:46.672666 waagent[2023]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 23:58:46.672903 waagent[2023]: 2025-09-12T23:58:46.672864Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 
23:58:50.888879 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:58:50.896259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:51.204266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:58:51.207830 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:58:51.252886 kubelet[2267]: E0912 23:58:51.252823 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:58:51.255194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:58:51.255326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:58:54.075224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:58:54.088667 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:37992.service - OpenSSH per-connection server daemon (10.200.16.10:37992). Sep 12 23:58:54.589389 sshd[2275]: Accepted publickey for core from 10.200.16.10 port 37992 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:54.590677 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:54.594589 systemd-logind[1788]: New session 3 of user core. Sep 12 23:58:54.602350 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:58:54.994369 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:38004.service - OpenSSH per-connection server daemon (10.200.16.10:38004). 
Sep 12 23:58:55.415295 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 38004 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:55.416662 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:55.421834 systemd-logind[1788]: New session 4 of user core. Sep 12 23:58:55.427481 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:58:55.742514 sshd[2280]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:55.747015 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:38004.service: Deactivated successfully. Sep 12 23:58:55.749979 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:58:55.751047 systemd-logind[1788]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:58:55.752298 systemd-logind[1788]: Removed session 4. Sep 12 23:58:55.832324 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:38016.service - OpenSSH per-connection server daemon (10.200.16.10:38016). Sep 12 23:58:56.276391 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 38016 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:56.277706 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:56.282264 systemd-logind[1788]: New session 5 of user core. Sep 12 23:58:56.289343 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:58:56.632288 sshd[2288]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:56.635706 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:38016.service: Deactivated successfully. Sep 12 23:58:56.638716 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:58:56.639212 systemd-logind[1788]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:58:56.640085 systemd-logind[1788]: Removed session 5. 
Sep 12 23:58:56.711318 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:38030.service - OpenSSH per-connection server daemon (10.200.16.10:38030). Sep 12 23:58:57.156894 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 38030 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:57.158230 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:57.162840 systemd-logind[1788]: New session 6 of user core. Sep 12 23:58:57.168477 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:58:57.489304 sshd[2296]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:57.492356 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:38030.service: Deactivated successfully. Sep 12 23:58:57.495228 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:58:57.495291 systemd-logind[1788]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:58:57.496774 systemd-logind[1788]: Removed session 6. Sep 12 23:58:57.561419 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:38034.service - OpenSSH per-connection server daemon (10.200.16.10:38034). Sep 12 23:58:57.971319 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 38034 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:57.972576 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:57.976283 systemd-logind[1788]: New session 7 of user core. Sep 12 23:58:57.987320 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 23:58:58.450599 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:58:58.450873 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:58:58.477260 sudo[2308]: pam_unix(sudo:session): session closed for user root Sep 12 23:58:58.576318 sshd[2304]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:58.579834 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:38034.service: Deactivated successfully. Sep 12 23:58:58.583047 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:58:58.583934 systemd-logind[1788]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:58:58.584911 systemd-logind[1788]: Removed session 7. Sep 12 23:58:58.658703 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:38048.service - OpenSSH per-connection server daemon (10.200.16.10:38048). Sep 12 23:58:59.077388 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 38048 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:59.078741 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:59.083358 systemd-logind[1788]: New session 8 of user core. Sep 12 23:58:59.089426 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 23:58:59.318737 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:58:59.319374 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:58:59.322522 sudo[2318]: pam_unix(sudo:session): session closed for user root Sep 12 23:58:59.326881 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:58:59.327165 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:58:59.343415 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:58:59.344931 auditctl[2321]: No rules Sep 12 23:58:59.345814 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:58:59.346066 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:58:59.349069 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:58:59.371022 augenrules[2340]: No rules Sep 12 23:58:59.374477 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:58:59.375675 sudo[2317]: pam_unix(sudo:session): session closed for user root Sep 12 23:58:59.476300 sshd[2313]: pam_unix(sshd:session): session closed for user core Sep 12 23:58:59.479656 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:38048.service: Deactivated successfully. Sep 12 23:58:59.482334 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:58:59.483158 systemd-logind[1788]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:58:59.483927 systemd-logind[1788]: Removed session 8. Sep 12 23:58:59.557477 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:38058.service - OpenSSH per-connection server daemon (10.200.16.10:38058). 
Sep 12 23:58:59.974719 sshd[2349]: Accepted publickey for core from 10.200.16.10 port 38058 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:58:59.975996 sshd[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:58:59.979673 systemd-logind[1788]: New session 9 of user core. Sep 12 23:58:59.990303 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:59:00.216530 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:59:00.216799 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:01.348712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:59:01.353342 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:59:01.364346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:01.365009 (dockerd)[2369]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:59:01.722285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:01.733431 (kubelet)[2382]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:01.769299 kubelet[2382]: E0912 23:59:01.769243 2382 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:01.772286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:01.772449 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:59:02.334134 dockerd[2369]: time="2025-09-12T23:59:02.332596280Z" level=info msg="Starting up" Sep 12 23:59:02.693708 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3362251793-merged.mount: Deactivated successfully. Sep 12 23:59:02.928369 chronyd[1774]: Selected source PHC0 Sep 12 23:59:02.950184 dockerd[2369]: time="2025-09-12T23:59:02.950098747Z" level=info msg="Loading containers: start." Sep 12 23:59:03.239114 kernel: Initializing XFRM netlink socket Sep 12 23:59:03.443630 systemd-networkd[1407]: docker0: Link UP Sep 12 23:59:03.471221 dockerd[2369]: time="2025-09-12T23:59:03.471172189Z" level=info msg="Loading containers: done." Sep 12 23:59:03.493058 dockerd[2369]: time="2025-09-12T23:59:03.493004231Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:59:03.493230 dockerd[2369]: time="2025-09-12T23:59:03.493142184Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:59:03.493310 dockerd[2369]: time="2025-09-12T23:59:03.493269377Z" level=info msg="Daemon has completed initialization" Sep 12 23:59:03.557878 dockerd[2369]: time="2025-09-12T23:59:03.557697199Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:59:03.558400 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:59:04.553218 containerd[1830]: time="2025-09-12T23:59:04.553176857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 23:59:05.579652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81975457.mount: Deactivated successfully. 
Sep 12 23:59:06.750177 containerd[1830]: time="2025-09-12T23:59:06.750125019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:06.755482 containerd[1830]: time="2025-09-12T23:59:06.755447773Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 12 23:59:06.758819 containerd[1830]: time="2025-09-12T23:59:06.758790289Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:06.764120 containerd[1830]: time="2025-09-12T23:59:06.764078682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:06.765908 containerd[1830]: time="2025-09-12T23:59:06.765117761Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.211897744s" Sep 12 23:59:06.765908 containerd[1830]: time="2025-09-12T23:59:06.765157041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 23:59:06.766593 containerd[1830]: time="2025-09-12T23:59:06.766543360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 23:59:08.225122 containerd[1830]: time="2025-09-12T23:59:08.225034631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:08.228265 containerd[1830]: time="2025-09-12T23:59:08.228231147Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 12 23:59:08.232194 containerd[1830]: time="2025-09-12T23:59:08.231805463Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:08.239260 containerd[1830]: time="2025-09-12T23:59:08.239220574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:08.240280 containerd[1830]: time="2025-09-12T23:59:08.240242573Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.473664213s" Sep 12 23:59:08.240377 containerd[1830]: time="2025-09-12T23:59:08.240362373Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 23:59:08.240860 containerd[1830]: time="2025-09-12T23:59:08.240818772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 23:59:09.291861 containerd[1830]: time="2025-09-12T23:59:09.291778127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:09.296455 containerd[1830]: 
time="2025-09-12T23:59:09.296216922Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 12 23:59:09.299664 containerd[1830]: time="2025-09-12T23:59:09.299614598Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:09.304839 containerd[1830]: time="2025-09-12T23:59:09.304794951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:09.307008 containerd[1830]: time="2025-09-12T23:59:09.305896110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.065041698s" Sep 12 23:59:09.307008 containerd[1830]: time="2025-09-12T23:59:09.305931230Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 23:59:09.307124 containerd[1830]: time="2025-09-12T23:59:09.307021069Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 23:59:10.469535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246904521.mount: Deactivated successfully. 
Sep 12 23:59:10.787969 containerd[1830]: time="2025-09-12T23:59:10.787847234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:10.791010 containerd[1830]: time="2025-09-12T23:59:10.790820470Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 12 23:59:10.794357 containerd[1830]: time="2025-09-12T23:59:10.794129266Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:10.799961 containerd[1830]: time="2025-09-12T23:59:10.799922779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:10.800675 containerd[1830]: time="2025-09-12T23:59:10.800645779Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.49359847s" Sep 12 23:59:10.800782 containerd[1830]: time="2025-09-12T23:59:10.800765498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 23:59:10.801400 containerd[1830]: time="2025-09-12T23:59:10.801369898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:59:11.490828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424546283.mount: Deactivated successfully. 
Sep 12 23:59:11.889508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:59:11.899408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:12.133259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:12.136909 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:12.476001 kubelet[2621]: E0912 23:59:12.185983 2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:12.189278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:12.189430 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:59:13.155261 containerd[1830]: time="2025-09-12T23:59:13.155208701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:13.158247 containerd[1830]: time="2025-09-12T23:59:13.158187939Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 23:59:13.162260 containerd[1830]: time="2025-09-12T23:59:13.162195176Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:13.167732 containerd[1830]: time="2025-09-12T23:59:13.167687853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:13.169181 containerd[1830]: time="2025-09-12T23:59:13.168824732Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.367418514s" Sep 12 23:59:13.169181 containerd[1830]: time="2025-09-12T23:59:13.168860932Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 23:59:13.169378 containerd[1830]: time="2025-09-12T23:59:13.169316732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:59:13.739199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399073589.mount: Deactivated successfully. 
Sep 12 23:59:13.767142 containerd[1830]: time="2025-09-12T23:59:13.766556876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:13.782513 containerd[1830]: time="2025-09-12T23:59:13.782473426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 23:59:13.786175 containerd[1830]: time="2025-09-12T23:59:13.786126144Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:13.792183 containerd[1830]: time="2025-09-12T23:59:13.791366101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:13.792183 containerd[1830]: time="2025-09-12T23:59:13.792009100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 622.655568ms" Sep 12 23:59:13.792183 containerd[1830]: time="2025-09-12T23:59:13.792045420Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:59:13.792813 containerd[1830]: time="2025-09-12T23:59:13.792647620Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 23:59:14.442076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4417684.mount: Deactivated successfully. Sep 12 23:59:14.736937 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Sep 12 23:59:17.689559 containerd[1830]: time="2025-09-12T23:59:17.689506067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:17.692934 containerd[1830]: time="2025-09-12T23:59:17.692895943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 12 23:59:17.696423 containerd[1830]: time="2025-09-12T23:59:17.696268620Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:17.702362 containerd[1830]: time="2025-09-12T23:59:17.702327373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:17.703615 containerd[1830]: time="2025-09-12T23:59:17.703486892Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.910587472s" Sep 12 23:59:17.703615 containerd[1830]: time="2025-09-12T23:59:17.703520292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 23:59:22.389649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 23:59:22.398337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:22.864242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:59:22.868536 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:22.906980 kubelet[2759]: E0912 23:59:22.906941 2759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:22.910757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:22.911024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:23.876374 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:23.887345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:23.911984 systemd[1]: Reloading requested from client PID 2775 ('systemctl') (unit session-9.scope)... Sep 12 23:59:23.912150 systemd[1]: Reloading... Sep 12 23:59:24.022222 zram_generator::config[2816]: No configuration found. Sep 12 23:59:24.130859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:24.208341 systemd[1]: Reloading finished in 295 ms. Sep 12 23:59:24.257400 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:59:24.257610 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:59:24.258068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:24.263596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:24.429501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:59:24.433349 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:59:24.549649 kubelet[2894]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:24.549649 kubelet[2894]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:59:24.549649 kubelet[2894]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:24.550011 kubelet[2894]: I0912 23:59:24.549738 2894 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:59:24.813780 update_engine[1793]: I20250912 23:59:24.813122 1793 update_attempter.cc:509] Updating boot flags... 
Sep 12 23:59:25.000123 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2912) Sep 12 23:59:25.136126 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2914) Sep 12 23:59:25.335628 kubelet[2894]: I0912 23:59:25.335595 2894 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:59:25.335768 kubelet[2894]: I0912 23:59:25.335758 2894 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:59:25.336076 kubelet[2894]: I0912 23:59:25.336063 2894 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:59:25.352566 kubelet[2894]: E0912 23:59:25.352516 2894 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:25.353736 kubelet[2894]: I0912 23:59:25.353709 2894 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:59:25.361337 kubelet[2894]: E0912 23:59:25.361306 2894 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:59:25.361337 kubelet[2894]: I0912 23:59:25.361337 2894 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:59:25.365141 kubelet[2894]: I0912 23:59:25.365122 2894 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:59:25.365383 kubelet[2894]: I0912 23:59:25.365368 2894 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:59:25.365496 kubelet[2894]: I0912 23:59:25.365470 2894 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:59:25.365654 kubelet[2894]: I0912 23:59:25.365495 2894 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-d2844e2d10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 23:59:25.365737 kubelet[2894]: I0912 23:59:25.365661 2894 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:59:25.365737 kubelet[2894]: I0912 23:59:25.365670 2894 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:59:25.365778 kubelet[2894]: I0912 23:59:25.365769 2894 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:25.367940 kubelet[2894]: I0912 23:59:25.367916 2894 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:59:25.367974 kubelet[2894]: I0912 23:59:25.367943 2894 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:59:25.367974 kubelet[2894]: I0912 23:59:25.367964 2894 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:59:25.368013 kubelet[2894]: I0912 23:59:25.367978 2894 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:59:25.371018 kubelet[2894]: W0912 23:59:25.370565 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-d2844e2d10&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:25.371018 kubelet[2894]: E0912 23:59:25.370624 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-d2844e2d10&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:25.371018 kubelet[2894]: W0912 23:59:25.370936 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: 
connection refused Sep 12 23:59:25.371018 kubelet[2894]: E0912 23:59:25.370974 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:25.373572 kubelet[2894]: I0912 23:59:25.372436 2894 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:59:25.373572 kubelet[2894]: I0912 23:59:25.372878 2894 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:59:25.373572 kubelet[2894]: W0912 23:59:25.372921 2894 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:59:25.373572 kubelet[2894]: I0912 23:59:25.373459 2894 server.go:1274] "Started kubelet" Sep 12 23:59:25.376537 kubelet[2894]: I0912 23:59:25.376503 2894 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:59:25.377243 kubelet[2894]: I0912 23:59:25.377196 2894 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:59:25.377568 kubelet[2894]: I0912 23:59:25.377548 2894 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:59:25.377670 kubelet[2894]: I0912 23:59:25.377273 2894 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:59:25.378514 kubelet[2894]: I0912 23:59:25.378471 2894 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:59:25.379708 kubelet[2894]: E0912 23:59:25.378655 2894 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 
10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-d2844e2d10.1864ae736e209e83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-d2844e2d10,UID:ci-4081.3.5-n-d2844e2d10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-d2844e2d10,},FirstTimestamp:2025-09-12 23:59:25.373439619 +0000 UTC m=+0.937300161,LastTimestamp:2025-09-12 23:59:25.373439619 +0000 UTC m=+0.937300161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-d2844e2d10,}" Sep 12 23:59:25.380247 kubelet[2894]: I0912 23:59:25.380229 2894 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:59:25.381885 kubelet[2894]: I0912 23:59:25.381867 2894 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:59:25.382074 kubelet[2894]: I0912 23:59:25.382061 2894 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:59:25.382223 kubelet[2894]: I0912 23:59:25.382214 2894 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:59:25.382572 kubelet[2894]: W0912 23:59:25.382542 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:25.382679 kubelet[2894]: E0912 23:59:25.382665 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: 
connect: connection refused" logger="UnhandledError" Sep 12 23:59:25.383619 kubelet[2894]: E0912 23:59:25.383590 2894 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-d2844e2d10\" not found" Sep 12 23:59:25.383879 kubelet[2894]: E0912 23:59:25.383862 2894 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:59:25.384128 kubelet[2894]: E0912 23:59:25.384082 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-d2844e2d10?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Sep 12 23:59:25.384698 kubelet[2894]: I0912 23:59:25.384682 2894 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:59:25.384781 kubelet[2894]: I0912 23:59:25.384772 2894 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:59:25.384898 kubelet[2894]: I0912 23:59:25.384883 2894 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:59:25.459940 kubelet[2894]: I0912 23:59:25.459888 2894 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:59:25.460882 kubelet[2894]: I0912 23:59:25.460831 2894 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:59:25.460882 kubelet[2894]: I0912 23:59:25.460869 2894 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:59:25.460882 kubelet[2894]: I0912 23:59:25.460889 2894 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:59:25.461007 kubelet[2894]: E0912 23:59:25.460930 2894 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:59:25.462936 kubelet[2894]: W0912 23:59:25.462895 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:25.463009 kubelet[2894]: E0912 23:59:25.462940 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:25.483764 kubelet[2894]: E0912 23:59:25.483734 2894 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-d2844e2d10\" not found" Sep 12 23:59:25.517850 kubelet[2894]: I0912 23:59:25.517828 2894 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:59:25.517924 kubelet[2894]: I0912 23:59:25.517862 2894 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:59:25.517924 kubelet[2894]: I0912 23:59:25.517881 2894 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:25.525549 kubelet[2894]: I0912 23:59:25.525528 2894 policy_none.go:49] "None policy: Start" Sep 12 23:59:25.526102 kubelet[2894]: I0912 23:59:25.526068 2894 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 
23:59:25.526176 kubelet[2894]: I0912 23:59:25.526107 2894 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:59:25.533913 kubelet[2894]: I0912 23:59:25.533884 2894 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:59:25.534099 kubelet[2894]: I0912 23:59:25.534067 2894 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:59:25.534126 kubelet[2894]: I0912 23:59:25.534100 2894 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:59:25.535378 kubelet[2894]: I0912 23:59:25.535358 2894 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:59:25.538243 kubelet[2894]: E0912 23:59:25.538221 2894 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-d2844e2d10\" not found" Sep 12 23:59:25.585028 kubelet[2894]: E0912 23:59:25.584988 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-d2844e2d10?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Sep 12 23:59:25.637008 kubelet[2894]: I0912 23:59:25.636620 2894 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.637008 kubelet[2894]: E0912 23:59:25.636970 2894 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684259 kubelet[2894]: I0912 23:59:25.684180 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684259 kubelet[2894]: I0912 23:59:25.684216 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f96cbe8847d9c6ea9c2c8e5e5f598cc-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-d2844e2d10\" (UID: \"4f96cbe8847d9c6ea9c2c8e5e5f598cc\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684259 kubelet[2894]: I0912 23:59:25.684236 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5cd5a256a15066ee360d2b1bb12ae76-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-d2844e2d10\" (UID: \"c5cd5a256a15066ee360d2b1bb12ae76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684597 kubelet[2894]: I0912 23:59:25.684436 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5cd5a256a15066ee360d2b1bb12ae76-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-d2844e2d10\" (UID: \"c5cd5a256a15066ee360d2b1bb12ae76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684597 kubelet[2894]: I0912 23:59:25.684461 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684597 kubelet[2894]: I0912 23:59:25.684502 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684597 kubelet[2894]: I0912 23:59:25.684521 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684597 kubelet[2894]: I0912 23:59:25.684536 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5cd5a256a15066ee360d2b1bb12ae76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-d2844e2d10\" (UID: \"c5cd5a256a15066ee360d2b1bb12ae76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.684726 kubelet[2894]: I0912 23:59:25.684552 2894 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.839269 kubelet[2894]: I0912 23:59:25.839123 2894 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.839555 kubelet[2894]: E0912 23:59:25.839505 2894 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: 
connection refused" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:25.868635 containerd[1830]: time="2025-09-12T23:59:25.868597789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-d2844e2d10,Uid:c5cd5a256a15066ee360d2b1bb12ae76,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:25.871685 containerd[1830]: time="2025-09-12T23:59:25.871525026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-d2844e2d10,Uid:b1b97f6aa76813035417e3a044f140a2,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:25.874318 containerd[1830]: time="2025-09-12T23:59:25.874293183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-d2844e2d10,Uid:4f96cbe8847d9c6ea9c2c8e5e5f598cc,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:25.985677 kubelet[2894]: E0912 23:59:25.985601 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-d2844e2d10?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Sep 12 23:59:26.241568 kubelet[2894]: I0912 23:59:26.241476 2894 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:26.241960 kubelet[2894]: E0912 23:59:26.241930 2894 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:26.300726 kubelet[2894]: W0912 23:59:26.300673 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:26.300831 kubelet[2894]: E0912 23:59:26.300736 2894 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:26.372893 kubelet[2894]: W0912 23:59:26.372795 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:26.372893 kubelet[2894]: E0912 23:59:26.372844 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:26.552958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418805258.mount: Deactivated successfully. 
Sep 12 23:59:26.561918 kubelet[2894]: W0912 23:59:26.561818 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-d2844e2d10&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:26.561918 kubelet[2894]: E0912 23:59:26.561887 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-d2844e2d10&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:26.585383 containerd[1830]: time="2025-09-12T23:59:26.585336754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:26.588306 containerd[1830]: time="2025-09-12T23:59:26.588227311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 23:59:26.592131 containerd[1830]: time="2025-09-12T23:59:26.591491347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:26.596745 containerd[1830]: time="2025-09-12T23:59:26.595983182Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:26.599292 containerd[1830]: time="2025-09-12T23:59:26.599258979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:59:26.604499 containerd[1830]: time="2025-09-12T23:59:26.603463374Z" level=info 
msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:26.628859 containerd[1830]: time="2025-09-12T23:59:26.628796346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:59:26.634514 containerd[1830]: time="2025-09-12T23:59:26.634462700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:26.636110 containerd[1830]: time="2025-09-12T23:59:26.635358819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 766.67867ms" Sep 12 23:59:26.636426 containerd[1830]: time="2025-09-12T23:59:26.636394938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 761.940675ms" Sep 12 23:59:26.637160 containerd[1830]: time="2025-09-12T23:59:26.637131217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 765.546391ms" Sep 12 23:59:26.786506 
kubelet[2894]: E0912 23:59:26.786460 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-d2844e2d10?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Sep 12 23:59:26.871355 kubelet[2894]: W0912 23:59:26.871190 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:26.871355 kubelet[2894]: E0912 23:59:26.871258 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:27.043957 kubelet[2894]: I0912 23:59:27.043921 2894 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:27.044326 kubelet[2894]: E0912 23:59:27.044298 2894 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:27.454902 kubelet[2894]: E0912 23:59:27.454853 2894 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:27.972285 containerd[1830]: 
time="2025-09-12T23:59:27.971793813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:27.972285 containerd[1830]: time="2025-09-12T23:59:27.971847533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:27.972285 containerd[1830]: time="2025-09-12T23:59:27.971863533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:27.974226 containerd[1830]: time="2025-09-12T23:59:27.974018210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:27.974392 containerd[1830]: time="2025-09-12T23:59:27.974330930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:27.974528 containerd[1830]: time="2025-09-12T23:59:27.974477050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:27.974588 containerd[1830]: time="2025-09-12T23:59:27.974509290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:27.974901 containerd[1830]: time="2025-09-12T23:59:27.974814049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:27.987285 containerd[1830]: time="2025-09-12T23:59:27.987175076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:27.987285 containerd[1830]: time="2025-09-12T23:59:27.987235795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:27.987285 containerd[1830]: time="2025-09-12T23:59:27.987252475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:27.987520 containerd[1830]: time="2025-09-12T23:59:27.987341995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:28.027031 kubelet[2894]: W0912 23:59:28.026872 2894 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Sep 12 23:59:28.027031 kubelet[2894]: E0912 23:59:28.026914 2894 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:59:28.073555 containerd[1830]: time="2025-09-12T23:59:28.073510699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-d2844e2d10,Uid:b1b97f6aa76813035417e3a044f140a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cf137143d6f425476266b2131f2afe0ed65591fcf2a51ab30e03141f1c3da9d\"" Sep 12 23:59:28.075197 containerd[1830]: time="2025-09-12T23:59:28.075156218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-d2844e2d10,Uid:c5cd5a256a15066ee360d2b1bb12ae76,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd1632208c60a8c4904ce85eacb548ab03a0ced8204234b416df62245c067f4a\"" Sep 12 23:59:28.075804 containerd[1830]: time="2025-09-12T23:59:28.075648737Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-d2844e2d10,Uid:4f96cbe8847d9c6ea9c2c8e5e5f598cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba0535e736f031c73c6dee026145775b5506a4b7e1096bc5007fe89a826f59c\"" Sep 12 23:59:28.083565 containerd[1830]: time="2025-09-12T23:59:28.083360489Z" level=info msg="CreateContainer within sandbox \"1cf137143d6f425476266b2131f2afe0ed65591fcf2a51ab30e03141f1c3da9d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:59:28.083565 containerd[1830]: time="2025-09-12T23:59:28.083518048Z" level=info msg="CreateContainer within sandbox \"cd1632208c60a8c4904ce85eacb548ab03a0ced8204234b416df62245c067f4a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:59:28.089352 containerd[1830]: time="2025-09-12T23:59:28.088926042Z" level=info msg="CreateContainer within sandbox \"2ba0535e736f031c73c6dee026145775b5506a4b7e1096bc5007fe89a826f59c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:59:28.360957 kubelet[2894]: E0912 23:59:28.360768 2894 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-d2844e2d10.1864ae736e209e83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-d2844e2d10,UID:ci-4081.3.5-n-d2844e2d10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-d2844e2d10,},FirstTimestamp:2025-09-12 23:59:25.373439619 +0000 UTC m=+0.937300161,LastTimestamp:2025-09-12 23:59:25.373439619 +0000 UTC m=+0.937300161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-d2844e2d10,}" Sep 12 23:59:28.387444 
kubelet[2894]: E0912 23:59:28.387394 2894 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-d2844e2d10?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="3.2s" Sep 12 23:59:28.427473 containerd[1830]: time="2025-09-12T23:59:28.427426786Z" level=info msg="CreateContainer within sandbox \"cd1632208c60a8c4904ce85eacb548ab03a0ced8204234b416df62245c067f4a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a48d0b0a5efbd6caef66088d5c9c70934dfd5556d19d6a0149175ce9e44ba760\"" Sep 12 23:59:28.429131 containerd[1830]: time="2025-09-12T23:59:28.428117585Z" level=info msg="StartContainer for \"a48d0b0a5efbd6caef66088d5c9c70934dfd5556d19d6a0149175ce9e44ba760\"" Sep 12 23:59:28.541129 containerd[1830]: time="2025-09-12T23:59:28.541064459Z" level=info msg="StartContainer for \"a48d0b0a5efbd6caef66088d5c9c70934dfd5556d19d6a0149175ce9e44ba760\" returns successfully" Sep 12 23:59:28.630265 containerd[1830]: time="2025-09-12T23:59:28.630120360Z" level=info msg="CreateContainer within sandbox \"2ba0535e736f031c73c6dee026145775b5506a4b7e1096bc5007fe89a826f59c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b5b86c471195a47a2b63ed2a23f4154587ac7b2ff30bb81a9e3a0c7ef5c0f12\"" Sep 12 23:59:28.631203 containerd[1830]: time="2025-09-12T23:59:28.631176719Z" level=info msg="StartContainer for \"6b5b86c471195a47a2b63ed2a23f4154587ac7b2ff30bb81a9e3a0c7ef5c0f12\"" Sep 12 23:59:28.633764 containerd[1830]: time="2025-09-12T23:59:28.633464476Z" level=info msg="CreateContainer within sandbox \"1cf137143d6f425476266b2131f2afe0ed65591fcf2a51ab30e03141f1c3da9d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"953744a2e873f2c66b6e95fb8c1672be5f20cd5335e60214aa95f532c21a980f\"" Sep 12 23:59:28.634443 containerd[1830]: 
time="2025-09-12T23:59:28.634418635Z" level=info msg="StartContainer for \"953744a2e873f2c66b6e95fb8c1672be5f20cd5335e60214aa95f532c21a980f\"" Sep 12 23:59:28.657402 kubelet[2894]: I0912 23:59:28.657340 2894 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:28.709101 containerd[1830]: time="2025-09-12T23:59:28.708818432Z" level=info msg="StartContainer for \"6b5b86c471195a47a2b63ed2a23f4154587ac7b2ff30bb81a9e3a0c7ef5c0f12\" returns successfully" Sep 12 23:59:28.760587 containerd[1830]: time="2025-09-12T23:59:28.760466375Z" level=info msg="StartContainer for \"953744a2e873f2c66b6e95fb8c1672be5f20cd5335e60214aa95f532c21a980f\" returns successfully" Sep 12 23:59:30.688155 kubelet[2894]: I0912 23:59:30.688113 2894 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:30.688155 kubelet[2894]: E0912 23:59:30.688150 2894 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-d2844e2d10\": node \"ci-4081.3.5-n-d2844e2d10\" not found" Sep 12 23:59:31.372767 kubelet[2894]: I0912 23:59:31.372737 2894 apiserver.go:52] "Watching apiserver" Sep 12 23:59:31.382371 kubelet[2894]: I0912 23:59:31.382322 2894 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:59:31.527121 kubelet[2894]: W0912 23:59:31.526146 2894 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 23:59:32.978374 systemd[1]: Reloading requested from client PID 3234 ('systemctl') (unit session-9.scope)... Sep 12 23:59:32.978389 systemd[1]: Reloading... Sep 12 23:59:33.067120 zram_generator::config[3277]: No configuration found. 
Sep 12 23:59:33.180528 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:33.266485 systemd[1]: Reloading finished in 287 ms. Sep 12 23:59:33.303207 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:33.316069 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:59:33.316548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:33.323532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:33.614238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:33.618013 (kubelet)[3348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:59:33.669677 kubelet[3348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:33.669677 kubelet[3348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:59:33.669677 kubelet[3348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:59:33.670030 kubelet[3348]: I0912 23:59:33.669740 3348 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:59:33.678166 kubelet[3348]: I0912 23:59:33.677459 3348 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:59:33.678166 kubelet[3348]: I0912 23:59:33.677483 3348 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:59:33.678166 kubelet[3348]: I0912 23:59:33.677699 3348 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:59:33.679287 kubelet[3348]: I0912 23:59:33.679264 3348 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:59:33.685514 kubelet[3348]: I0912 23:59:33.683837 3348 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:59:33.688676 kubelet[3348]: E0912 23:59:33.688653 3348 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:59:33.688790 kubelet[3348]: I0912 23:59:33.688777 3348 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:59:33.693171 kubelet[3348]: I0912 23:59:33.693140 3348 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:59:33.693666 kubelet[3348]: I0912 23:59:33.693654 3348 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:59:33.693956 kubelet[3348]: I0912 23:59:33.693916 3348 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:59:33.694203 kubelet[3348]: I0912 23:59:33.694018 3348 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-d2844e2d10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 23:59:33.694337 kubelet[3348]: I0912 23:59:33.694324 3348 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:59:33.694389 kubelet[3348]: I0912 23:59:33.694382 3348 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:59:33.694471 kubelet[3348]: I0912 23:59:33.694462 3348 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:33.694706 kubelet[3348]: I0912 23:59:33.694692 3348 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:59:33.694795 kubelet[3348]: I0912 23:59:33.694785 3348 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:59:33.694862 kubelet[3348]: I0912 23:59:33.694854 3348 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:59:33.694921 kubelet[3348]: I0912 23:59:33.694913 3348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:59:33.696032 kubelet[3348]: I0912 23:59:33.696008 3348 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:59:33.703159 kubelet[3348]: I0912 23:59:33.700720 3348 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:59:33.703159 kubelet[3348]: I0912 23:59:33.701117 3348 server.go:1274] "Started kubelet" Sep 12 23:59:33.703159 kubelet[3348]: I0912 23:59:33.702698 3348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:59:33.711055 kubelet[3348]: I0912 23:59:33.711012 3348 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:59:33.712113 kubelet[3348]: I0912 23:59:33.711844 3348 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:59:33.713973 kubelet[3348]: I0912 23:59:33.713923 3348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:59:33.714159 kubelet[3348]: I0912 23:59:33.714143 
3348 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:59:33.715135 kubelet[3348]: I0912 23:59:33.714346 3348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:59:33.715440 kubelet[3348]: I0912 23:59:33.715420 3348 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:59:33.715964 kubelet[3348]: E0912 23:59:33.715923 3348 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-d2844e2d10\" not found" Sep 12 23:59:33.718157 kubelet[3348]: I0912 23:59:33.718114 3348 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:59:33.718669 kubelet[3348]: I0912 23:59:33.718634 3348 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:59:33.723268 kubelet[3348]: I0912 23:59:33.723249 3348 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:59:33.723508 kubelet[3348]: I0912 23:59:33.723488 3348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:59:33.726564 kubelet[3348]: I0912 23:59:33.726225 3348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:59:33.734314 kubelet[3348]: I0912 23:59:33.734287 3348 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:59:33.734314 kubelet[3348]: I0912 23:59:33.734311 3348 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:59:33.734314 kubelet[3348]: I0912 23:59:33.734325 3348 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:59:33.734438 kubelet[3348]: E0912 23:59:33.734362 3348 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:59:33.742677 kubelet[3348]: I0912 23:59:33.742644 3348 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:59:33.751288 kubelet[3348]: E0912 23:59:33.750842 3348 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:59:33.797815 kubelet[3348]: I0912 23:59:33.797538 3348 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:59:33.797815 kubelet[3348]: I0912 23:59:33.797555 3348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:59:33.797815 kubelet[3348]: I0912 23:59:33.797573 3348 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:33.797815 kubelet[3348]: I0912 23:59:33.797716 3348 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:59:33.797815 kubelet[3348]: I0912 23:59:33.797726 3348 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:59:33.797815 kubelet[3348]: I0912 23:59:33.797747 3348 policy_none.go:49] "None policy: Start" Sep 12 23:59:33.798675 kubelet[3348]: I0912 23:59:33.798660 3348 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:59:33.798821 kubelet[3348]: I0912 23:59:33.798810 3348 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:59:33.835465 kubelet[3348]: E0912 23:59:33.835438 3348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 
12 23:59:36.674231 kubelet[3348]: E0912 23:59:34.036066 3348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:59:36.674231 kubelet[3348]: E0912 23:59:34.436893 3348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:59:36.674231 kubelet[3348]: I0912 23:59:34.695885 3348 apiserver.go:52] "Watching apiserver" Sep 12 23:59:36.674231 kubelet[3348]: E0912 23:59:35.236988 3348 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:59:36.676119 kubelet[3348]: I0912 23:59:36.674861 3348 state_mem.go:75] "Updated machine memory state" Sep 12 23:59:36.676119 kubelet[3348]: I0912 23:59:36.676010 3348 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:59:36.676400 kubelet[3348]: I0912 23:59:36.676388 3348 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:59:36.676510 kubelet[3348]: I0912 23:59:36.676479 3348 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:59:36.677133 kubelet[3348]: I0912 23:59:36.677107 3348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:59:36.794147 kubelet[3348]: I0912 23:59:36.791426 3348 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.804222 kubelet[3348]: I0912 23:59:36.804190 3348 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.804379 kubelet[3348]: I0912 23:59:36.804277 3348 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.847619 kubelet[3348]: W0912 23:59:36.847581 3348 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must not contain dots] Sep 12 23:59:36.856626 kubelet[3348]: W0912 23:59:36.856546 3348 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 23:59:36.861895 kubelet[3348]: I0912 23:59:36.861774 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" podStartSLOduration=5.8617574900000005 podStartE2EDuration="5.86175749s" podCreationTimestamp="2025-09-12 23:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:59:36.86175757 +0000 UTC m=+3.238249288" watchObservedRunningTime="2025-09-12 23:59:36.86175749 +0000 UTC m=+3.238249208" Sep 12 23:59:36.887685 kubelet[3348]: I0912 23:59:36.887248 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-d2844e2d10" podStartSLOduration=0.887230261 podStartE2EDuration="887.230261ms" podCreationTimestamp="2025-09-12 23:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:59:36.873532597 +0000 UTC m=+3.250024355" watchObservedRunningTime="2025-09-12 23:59:36.887230261 +0000 UTC m=+3.263721979" Sep 12 23:59:36.919477 kubelet[3348]: I0912 23:59:36.919433 3348 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:59:36.934823 kubelet[3348]: I0912 23:59:36.934140 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5cd5a256a15066ee360d2b1bb12ae76-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-d2844e2d10\" (UID: \"c5cd5a256a15066ee360d2b1bb12ae76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" Sep 
12 23:59:36.934823 kubelet[3348]: I0912 23:59:36.934180 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.934823 kubelet[3348]: I0912 23:59:36.934198 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.934823 kubelet[3348]: I0912 23:59:36.934223 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.934823 kubelet[3348]: I0912 23:59:36.934257 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5cd5a256a15066ee360d2b1bb12ae76-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-d2844e2d10\" (UID: \"c5cd5a256a15066ee360d2b1bb12ae76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.935040 kubelet[3348]: I0912 23:59:36.934276 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c5cd5a256a15066ee360d2b1bb12ae76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-d2844e2d10\" (UID: \"c5cd5a256a15066ee360d2b1bb12ae76\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.935040 kubelet[3348]: I0912 23:59:36.934325 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.935040 kubelet[3348]: I0912 23:59:36.934366 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1b97f6aa76813035417e3a044f140a2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-d2844e2d10\" (UID: \"b1b97f6aa76813035417e3a044f140a2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:36.935040 kubelet[3348]: I0912 23:59:36.934394 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f96cbe8847d9c6ea9c2c8e5e5f598cc-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-d2844e2d10\" (UID: \"4f96cbe8847d9c6ea9c2c8e5e5f598cc\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-d2844e2d10" Sep 12 23:59:37.153839 kubelet[3348]: I0912 23:59:37.153703 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-d2844e2d10" podStartSLOduration=1.153682526 podStartE2EDuration="1.153682526s" podCreationTimestamp="2025-09-12 23:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:59:36.887448581 
+0000 UTC m=+3.263940299" watchObservedRunningTime="2025-09-12 23:59:37.153682526 +0000 UTC m=+3.530174244" Sep 12 23:59:38.883272 kubelet[3348]: I0912 23:59:38.882712 3348 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:59:38.883981 containerd[1830]: time="2025-09-12T23:59:38.883465611Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:59:38.884236 kubelet[3348]: I0912 23:59:38.884142 3348 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:59:39.046845 kubelet[3348]: I0912 23:59:39.046694 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/20570973-620d-4f7f-9d50-ce7d7b5978a8-kube-proxy\") pod \"kube-proxy-f9zxw\" (UID: \"20570973-620d-4f7f-9d50-ce7d7b5978a8\") " pod="kube-system/kube-proxy-f9zxw" Sep 12 23:59:39.046845 kubelet[3348]: I0912 23:59:39.046738 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20570973-620d-4f7f-9d50-ce7d7b5978a8-xtables-lock\") pod \"kube-proxy-f9zxw\" (UID: \"20570973-620d-4f7f-9d50-ce7d7b5978a8\") " pod="kube-system/kube-proxy-f9zxw" Sep 12 23:59:39.046845 kubelet[3348]: I0912 23:59:39.046755 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66db6\" (UniqueName: \"kubernetes.io/projected/20570973-620d-4f7f-9d50-ce7d7b5978a8-kube-api-access-66db6\") pod \"kube-proxy-f9zxw\" (UID: \"20570973-620d-4f7f-9d50-ce7d7b5978a8\") " pod="kube-system/kube-proxy-f9zxw" Sep 12 23:59:39.046845 kubelet[3348]: I0912 23:59:39.046777 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/20570973-620d-4f7f-9d50-ce7d7b5978a8-lib-modules\") pod \"kube-proxy-f9zxw\" (UID: \"20570973-620d-4f7f-9d50-ce7d7b5978a8\") " pod="kube-system/kube-proxy-f9zxw" Sep 12 23:59:39.155222 kubelet[3348]: E0912 23:59:39.154909 3348 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 23:59:39.155222 kubelet[3348]: E0912 23:59:39.154937 3348 projected.go:194] Error preparing data for projected volume kube-api-access-66db6 for pod kube-system/kube-proxy-f9zxw: configmap "kube-root-ca.crt" not found Sep 12 23:59:39.155222 kubelet[3348]: E0912 23:59:39.154993 3348 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20570973-620d-4f7f-9d50-ce7d7b5978a8-kube-api-access-66db6 podName:20570973-620d-4f7f-9d50-ce7d7b5978a8 nodeName:}" failed. No retries permitted until 2025-09-12 23:59:39.65497303 +0000 UTC m=+6.031464708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-66db6" (UniqueName: "kubernetes.io/projected/20570973-620d-4f7f-9d50-ce7d7b5978a8-kube-api-access-66db6") pod "kube-proxy-f9zxw" (UID: "20570973-620d-4f7f-9d50-ce7d7b5978a8") : configmap "kube-root-ca.crt" not found Sep 12 23:59:39.868554 containerd[1830]: time="2025-09-12T23:59:39.868449600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9zxw,Uid:20570973-620d-4f7f-9d50-ce7d7b5978a8,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:39.915541 containerd[1830]: time="2025-09-12T23:59:39.915440068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:39.915541 containerd[1830]: time="2025-09-12T23:59:39.915494628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:39.915541 containerd[1830]: time="2025-09-12T23:59:39.915511588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:39.916186 containerd[1830]: time="2025-09-12T23:59:39.915595627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:39.969786 containerd[1830]: time="2025-09-12T23:59:39.969737447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9zxw,Uid:20570973-620d-4f7f-9d50-ce7d7b5978a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"09e5129d23110be7e8563e231a062a295f53ffaed68fc3b29fd452af7a9c3a71\"" Sep 12 23:59:39.975364 containerd[1830]: time="2025-09-12T23:59:39.975333201Z" level=info msg="CreateContainer within sandbox \"09e5129d23110be7e8563e231a062a295f53ffaed68fc3b29fd452af7a9c3a71\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:59:40.044934 containerd[1830]: time="2025-09-12T23:59:40.044879524Z" level=info msg="CreateContainer within sandbox \"09e5129d23110be7e8563e231a062a295f53ffaed68fc3b29fd452af7a9c3a71\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e168ad9077b8c72535e2d1d7d9ae3aeeb56d734188099be21d425657c525b2ea\"" Sep 12 23:59:40.046674 containerd[1830]: time="2025-09-12T23:59:40.046642602Z" level=info msg="StartContainer for \"e168ad9077b8c72535e2d1d7d9ae3aeeb56d734188099be21d425657c525b2ea\"" Sep 12 23:59:40.121370 containerd[1830]: time="2025-09-12T23:59:40.121257400Z" level=info msg="StartContainer for \"e168ad9077b8c72535e2d1d7d9ae3aeeb56d734188099be21d425657c525b2ea\" returns successfully" Sep 12 23:59:40.156102 kubelet[3348]: I0912 23:59:40.156044 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/3cca2856-7d3a-4d5f-aed3-a2cd4b18fef3-var-lib-calico\") pod \"tigera-operator-58fc44c59b-6lhfq\" (UID: \"3cca2856-7d3a-4d5f-aed3-a2cd4b18fef3\") " pod="tigera-operator/tigera-operator-58fc44c59b-6lhfq" Sep 12 23:59:40.156102 kubelet[3348]: I0912 23:59:40.156110 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5kq\" (UniqueName: \"kubernetes.io/projected/3cca2856-7d3a-4d5f-aed3-a2cd4b18fef3-kube-api-access-vp5kq\") pod \"tigera-operator-58fc44c59b-6lhfq\" (UID: \"3cca2856-7d3a-4d5f-aed3-a2cd4b18fef3\") " pod="tigera-operator/tigera-operator-58fc44c59b-6lhfq" Sep 12 23:59:40.355430 containerd[1830]: time="2025-09-12T23:59:40.355372620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-6lhfq,Uid:3cca2856-7d3a-4d5f-aed3-a2cd4b18fef3,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:59:40.402760 containerd[1830]: time="2025-09-12T23:59:40.402581208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:40.402760 containerd[1830]: time="2025-09-12T23:59:40.402643008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:40.403713 containerd[1830]: time="2025-09-12T23:59:40.402658328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:40.403713 containerd[1830]: time="2025-09-12T23:59:40.402750648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:40.441590 containerd[1830]: time="2025-09-12T23:59:40.441553045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-6lhfq,Uid:3cca2856-7d3a-4d5f-aed3-a2cd4b18fef3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e3ecf2b4e820271708791d70477b343993d161ed3570cdf9a06208d9b6c86d86\"" Sep 12 23:59:40.445512 containerd[1830]: time="2025-09-12T23:59:40.445471401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:59:41.965637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473497333.mount: Deactivated successfully. Sep 12 23:59:42.251755 kubelet[3348]: I0912 23:59:42.251162 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9zxw" podStartSLOduration=4.251143321 podStartE2EDuration="4.251143321s" podCreationTimestamp="2025-09-12 23:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:59:40.801044367 +0000 UTC m=+7.177536085" watchObservedRunningTime="2025-09-12 23:59:42.251143321 +0000 UTC m=+8.627635039" Sep 12 23:59:42.881631 containerd[1830]: time="2025-09-12T23:59:42.881576663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:42.885372 containerd[1830]: time="2025-09-12T23:59:42.885190739Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 23:59:42.888720 containerd[1830]: time="2025-09-12T23:59:42.888690815Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:42.895154 containerd[1830]: time="2025-09-12T23:59:42.895123848Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:42.896326 containerd[1830]: time="2025-09-12T23:59:42.895816927Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.450094847s" Sep 12 23:59:42.896326 containerd[1830]: time="2025-09-12T23:59:42.895850767Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 23:59:42.897893 containerd[1830]: time="2025-09-12T23:59:42.897860725Z" level=info msg="CreateContainer within sandbox \"e3ecf2b4e820271708791d70477b343993d161ed3570cdf9a06208d9b6c86d86\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:59:42.943795 containerd[1830]: time="2025-09-12T23:59:42.943750354Z" level=info msg="CreateContainer within sandbox \"e3ecf2b4e820271708791d70477b343993d161ed3570cdf9a06208d9b6c86d86\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf82cd1ebde36ecc3ee05eefec6ef890c2dce703117f8b16ee4ba68d75d1c27c\"" Sep 12 23:59:42.946241 containerd[1830]: time="2025-09-12T23:59:42.944194753Z" level=info msg="StartContainer for \"bf82cd1ebde36ecc3ee05eefec6ef890c2dce703117f8b16ee4ba68d75d1c27c\"" Sep 12 23:59:43.000982 containerd[1830]: time="2025-09-12T23:59:43.000925726Z" level=info msg="StartContainer for \"bf82cd1ebde36ecc3ee05eefec6ef890c2dce703117f8b16ee4ba68d75d1c27c\" returns successfully" Sep 12 23:59:46.710881 kubelet[3348]: I0912 23:59:46.710375 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="tigera-operator/tigera-operator-58fc44c59b-6lhfq" podStartSLOduration=4.256746345 podStartE2EDuration="6.710355428s" podCreationTimestamp="2025-09-12 23:59:40 +0000 UTC" firstStartedPulling="2025-09-12 23:59:40.443037443 +0000 UTC m=+6.819529161" lastFinishedPulling="2025-09-12 23:59:42.896646526 +0000 UTC m=+9.273138244" observedRunningTime="2025-09-12 23:59:43.808773092 +0000 UTC m=+10.185264850" watchObservedRunningTime="2025-09-12 23:59:46.710355428 +0000 UTC m=+13.086847146" Sep 12 23:59:49.180724 sudo[2353]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:49.262325 sshd[2349]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:49.270409 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:38058.service: Deactivated successfully. Sep 12 23:59:49.278167 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:59:49.281697 systemd-logind[1788]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:59:49.283228 systemd-logind[1788]: Removed session 9. 
Sep 12 23:59:53.443169 kubelet[3348]: I0912 23:59:53.443114 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzdk\" (UniqueName: \"kubernetes.io/projected/83646730-01a9-46ff-98c3-9b35a45818fd-kube-api-access-mnzdk\") pod \"calico-typha-66b77b9c6b-4s58t\" (UID: \"83646730-01a9-46ff-98c3-9b35a45818fd\") " pod="calico-system/calico-typha-66b77b9c6b-4s58t" Sep 12 23:59:53.443169 kubelet[3348]: I0912 23:59:53.443161 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83646730-01a9-46ff-98c3-9b35a45818fd-tigera-ca-bundle\") pod \"calico-typha-66b77b9c6b-4s58t\" (UID: \"83646730-01a9-46ff-98c3-9b35a45818fd\") " pod="calico-system/calico-typha-66b77b9c6b-4s58t" Sep 12 23:59:53.444813 kubelet[3348]: I0912 23:59:53.443180 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83646730-01a9-46ff-98c3-9b35a45818fd-typha-certs\") pod \"calico-typha-66b77b9c6b-4s58t\" (UID: \"83646730-01a9-46ff-98c3-9b35a45818fd\") " pod="calico-system/calico-typha-66b77b9c6b-4s58t" Sep 12 23:59:53.720903 containerd[1830]: time="2025-09-12T23:59:53.720764651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66b77b9c6b-4s58t,Uid:83646730-01a9-46ff-98c3-9b35a45818fd,Namespace:calico-system,Attempt:0,}" Sep 12 23:59:53.744847 kubelet[3348]: I0912 23:59:53.744547 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eeb79e60-1163-4302-9e63-f8ffab65a314-node-certs\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.744847 kubelet[3348]: I0912 23:59:53.744586 3348 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeb79e60-1163-4302-9e63-f8ffab65a314-tigera-ca-bundle\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.744847 kubelet[3348]: I0912 23:59:53.744601 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-var-run-calico\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.744847 kubelet[3348]: I0912 23:59:53.744618 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-cni-net-dir\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.744847 kubelet[3348]: I0912 23:59:53.744634 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-flexvol-driver-host\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745177 kubelet[3348]: I0912 23:59:53.744651 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r8h\" (UniqueName: \"kubernetes.io/projected/eeb79e60-1163-4302-9e63-f8ffab65a314-kube-api-access-q9r8h\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745177 kubelet[3348]: I0912 23:59:53.744666 3348 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-cni-bin-dir\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745177 kubelet[3348]: I0912 23:59:53.744679 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-lib-modules\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745177 kubelet[3348]: I0912 23:59:53.744693 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-xtables-lock\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745177 kubelet[3348]: I0912 23:59:53.744707 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-policysync\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745282 kubelet[3348]: I0912 23:59:53.744723 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-cni-log-dir\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.745282 kubelet[3348]: I0912 23:59:53.744736 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/eeb79e60-1163-4302-9e63-f8ffab65a314-var-lib-calico\") pod \"calico-node-kzhr4\" (UID: \"eeb79e60-1163-4302-9e63-f8ffab65a314\") " pod="calico-system/calico-node-kzhr4" Sep 12 23:59:53.777368 containerd[1830]: time="2025-09-12T23:59:53.777171693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:53.777368 containerd[1830]: time="2025-09-12T23:59:53.777231453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:53.777368 containerd[1830]: time="2025-09-12T23:59:53.777249653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:53.782170 containerd[1830]: time="2025-09-12T23:59:53.777345332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:53.865101 kubelet[3348]: E0912 23:59:53.864095 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:53.865563 kubelet[3348]: W0912 23:59:53.865272 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:53.865563 kubelet[3348]: E0912 23:59:53.865310 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:53.873381 kubelet[3348]: E0912 23:59:53.873256 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:53.873381 kubelet[3348]: W0912 23:59:53.873272 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:53.873381 kubelet[3348]: E0912 23:59:53.873289 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:53.873717 containerd[1830]: time="2025-09-12T23:59:53.873676639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66b77b9c6b-4s58t,Uid:83646730-01a9-46ff-98c3-9b35a45818fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2a5e55c7226291f6a0ab15cce9a9958de39dea13a3f7a1ecc5cbfba3ea1e44d\"" Sep 12 23:59:53.874525 kubelet[3348]: E0912 23:59:53.874124 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:53.874525 kubelet[3348]: W0912 23:59:53.874141 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:53.874525 kubelet[3348]: E0912 23:59:53.874383 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:53.876404 kubelet[3348]: E0912 23:59:53.875716 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:53.876404 kubelet[3348]: W0912 23:59:53.875729 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:53.876404 kubelet[3348]: E0912 23:59:53.875837 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:53.877201 containerd[1830]: time="2025-09-12T23:59:53.875987596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 23:59:53.877233 kubelet[3348]: E0912 23:59:53.876871 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:53.877233 kubelet[3348]: W0912 23:59:53.876886 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:53.879429 kubelet[3348]: E0912 23:59:53.878177 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:53.879645 kubelet[3348]: E0912 23:59:53.879565 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:53.879645 kubelet[3348]: W0912 23:59:53.879588 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:53.879645 kubelet[3348]: E0912 23:59:53.879602 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:53.964354 kubelet[3348]: E0912 23:59:53.963930 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 12 23:59:53.992626 containerd[1830]: time="2025-09-12T23:59:53.992405314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kzhr4,Uid:eeb79e60-1163-4302-9e63-f8ffab65a314,Namespace:calico-system,Attempt:0,}" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033483 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.036148 kubelet[3348]: W0912 23:59:54.033510 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033531 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033660 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.036148 kubelet[3348]: W0912 23:59:54.033667 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033674 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033785 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.036148 kubelet[3348]: W0912 23:59:54.033792 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033798 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.036148 kubelet[3348]: E0912 23:59:54.033907 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.036744 kubelet[3348]: W0912 23:59:54.033912 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.036744 kubelet[3348]: E0912 23:59:54.033919 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.036744 kubelet[3348]: E0912 23:59:54.034221 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.036744 kubelet[3348]: W0912 23:59:54.034232 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.036744 kubelet[3348]: E0912 23:59:54.034243 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.037192 kubelet[3348]: E0912 23:59:54.037155 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.037192 kubelet[3348]: W0912 23:59:54.037174 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.037192 kubelet[3348]: E0912 23:59:54.037188 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.037373 kubelet[3348]: E0912 23:59:54.037358 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.037373 kubelet[3348]: W0912 23:59:54.037370 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.037451 kubelet[3348]: E0912 23:59:54.037380 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.037530 kubelet[3348]: E0912 23:59:54.037514 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.037530 kubelet[3348]: W0912 23:59:54.037527 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.037608 kubelet[3348]: E0912 23:59:54.037535 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.038434 kubelet[3348]: E0912 23:59:54.037672 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.038434 kubelet[3348]: W0912 23:59:54.037685 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.038434 kubelet[3348]: E0912 23:59:54.037693 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.038434 kubelet[3348]: E0912 23:59:54.037847 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.038434 kubelet[3348]: W0912 23:59:54.037856 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.038434 kubelet[3348]: E0912 23:59:54.037866 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.040128 kubelet[3348]: E0912 23:59:54.039783 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.040128 kubelet[3348]: W0912 23:59:54.039799 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.040128 kubelet[3348]: E0912 23:59:54.039812 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.040128 kubelet[3348]: E0912 23:59:54.039970 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.040128 kubelet[3348]: W0912 23:59:54.039977 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.040128 kubelet[3348]: E0912 23:59:54.039985 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.040316 kubelet[3348]: E0912 23:59:54.040143 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.040316 kubelet[3348]: W0912 23:59:54.040152 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.040316 kubelet[3348]: E0912 23:59:54.040161 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.042535 kubelet[3348]: E0912 23:59:54.042446 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.042535 kubelet[3348]: W0912 23:59:54.042464 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.042535 kubelet[3348]: E0912 23:59:54.042476 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.044119 kubelet[3348]: E0912 23:59:54.043712 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.044119 kubelet[3348]: W0912 23:59:54.043743 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.044119 kubelet[3348]: E0912 23:59:54.043756 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.044119 kubelet[3348]: E0912 23:59:54.043971 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.044119 kubelet[3348]: W0912 23:59:54.043980 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.044119 kubelet[3348]: E0912 23:59:54.043990 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.044296 kubelet[3348]: E0912 23:59:54.044155 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.044296 kubelet[3348]: W0912 23:59:54.044167 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.044296 kubelet[3348]: E0912 23:59:54.044176 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.045969 kubelet[3348]: E0912 23:59:54.045909 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.045969 kubelet[3348]: W0912 23:59:54.045927 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.045969 kubelet[3348]: E0912 23:59:54.045939 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.046843 kubelet[3348]: E0912 23:59:54.046617 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.046843 kubelet[3348]: W0912 23:59:54.046634 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.046843 kubelet[3348]: E0912 23:59:54.046646 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.046843 kubelet[3348]: E0912 23:59:54.046808 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.046843 kubelet[3348]: W0912 23:59:54.046832 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.046843 kubelet[3348]: E0912 23:59:54.046842 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.052331 kubelet[3348]: E0912 23:59:54.051703 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.052331 kubelet[3348]: W0912 23:59:54.051721 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.052331 kubelet[3348]: E0912 23:59:54.051734 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.052331 kubelet[3348]: I0912 23:59:54.052037 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fc7d1ef-2f94-42c6-a027-0084519d796b-kubelet-dir\") pod \"csi-node-driver-4wv45\" (UID: \"7fc7d1ef-2f94-42c6-a027-0084519d796b\") " pod="calico-system/csi-node-driver-4wv45" Sep 12 23:59:54.052331 kubelet[3348]: E0912 23:59:54.052314 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.052331 kubelet[3348]: W0912 23:59:54.052327 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.052537 kubelet[3348]: E0912 23:59:54.052347 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.053322 kubelet[3348]: E0912 23:59:54.053299 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.053322 kubelet[3348]: W0912 23:59:54.053317 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.053322 kubelet[3348]: E0912 23:59:54.053336 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.054389 kubelet[3348]: E0912 23:59:54.054206 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.054697 kubelet[3348]: W0912 23:59:54.054610 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.054697 kubelet[3348]: E0912 23:59:54.054663 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.055421 kubelet[3348]: I0912 23:59:54.055277 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7fc7d1ef-2f94-42c6-a027-0084519d796b-registration-dir\") pod \"csi-node-driver-4wv45\" (UID: \"7fc7d1ef-2f94-42c6-a027-0084519d796b\") " pod="calico-system/csi-node-driver-4wv45" Sep 12 23:59:54.056211 kubelet[3348]: E0912 23:59:54.055450 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.056211 kubelet[3348]: W0912 23:59:54.055763 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.056211 kubelet[3348]: E0912 23:59:54.055781 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.057144 kubelet[3348]: E0912 23:59:54.056554 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.057773 kubelet[3348]: W0912 23:59:54.057448 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.057773 kubelet[3348]: E0912 23:59:54.057475 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.058430 kubelet[3348]: E0912 23:59:54.058283 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.058430 kubelet[3348]: W0912 23:59:54.058303 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.058430 kubelet[3348]: E0912 23:59:54.058318 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.058430 kubelet[3348]: I0912 23:59:54.058347 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7fc7d1ef-2f94-42c6-a027-0084519d796b-socket-dir\") pod \"csi-node-driver-4wv45\" (UID: \"7fc7d1ef-2f94-42c6-a027-0084519d796b\") " pod="calico-system/csi-node-driver-4wv45" Sep 12 23:59:54.059528 kubelet[3348]: E0912 23:59:54.058784 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.059528 kubelet[3348]: W0912 23:59:54.058799 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.059528 kubelet[3348]: E0912 23:59:54.058819 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.059528 kubelet[3348]: I0912 23:59:54.058844 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7fc7d1ef-2f94-42c6-a027-0084519d796b-varrun\") pod \"csi-node-driver-4wv45\" (UID: \"7fc7d1ef-2f94-42c6-a027-0084519d796b\") " pod="calico-system/csi-node-driver-4wv45" Sep 12 23:59:54.060117 kubelet[3348]: E0912 23:59:54.059630 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.060117 kubelet[3348]: W0912 23:59:54.059650 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.060117 kubelet[3348]: E0912 23:59:54.059670 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.060117 kubelet[3348]: I0912 23:59:54.059690 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngcb\" (UniqueName: \"kubernetes.io/projected/7fc7d1ef-2f94-42c6-a027-0084519d796b-kube-api-access-nngcb\") pod \"csi-node-driver-4wv45\" (UID: \"7fc7d1ef-2f94-42c6-a027-0084519d796b\") " pod="calico-system/csi-node-driver-4wv45" Sep 12 23:59:54.061863 kubelet[3348]: E0912 23:59:54.061230 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.061863 kubelet[3348]: W0912 23:59:54.061246 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.061967 kubelet[3348]: E0912 23:59:54.061950 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.062402 kubelet[3348]: E0912 23:59:54.062380 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.062402 kubelet[3348]: W0912 23:59:54.062399 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.062402 kubelet[3348]: E0912 23:59:54.062435 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.063490 kubelet[3348]: E0912 23:59:54.063465 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.063490 kubelet[3348]: W0912 23:59:54.063484 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.063637 kubelet[3348]: E0912 23:59:54.063528 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.065182 kubelet[3348]: E0912 23:59:54.065166 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.065379 kubelet[3348]: W0912 23:59:54.065251 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.065379 kubelet[3348]: E0912 23:59:54.065270 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.065734 kubelet[3348]: E0912 23:59:54.065631 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.065734 kubelet[3348]: W0912 23:59:54.065645 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.065734 kubelet[3348]: E0912 23:59:54.065657 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.067318 kubelet[3348]: E0912 23:59:54.067259 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.067318 kubelet[3348]: W0912 23:59:54.067276 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.067318 kubelet[3348]: E0912 23:59:54.067290 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.078185 containerd[1830]: time="2025-09-12T23:59:54.077933916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:54.078442 containerd[1830]: time="2025-09-12T23:59:54.077985836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:54.078442 containerd[1830]: time="2025-09-12T23:59:54.078362035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:54.079970 containerd[1830]: time="2025-09-12T23:59:54.078669315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:54.154283 containerd[1830]: time="2025-09-12T23:59:54.152833012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kzhr4,Uid:eeb79e60-1163-4302-9e63-f8ffab65a314,Namespace:calico-system,Attempt:0,} returns sandbox id \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\"" Sep 12 23:59:54.161850 kubelet[3348]: E0912 23:59:54.161829 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.163071 kubelet[3348]: W0912 23:59:54.163052 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.164201 kubelet[3348]: E0912 23:59:54.164163 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.164472 kubelet[3348]: E0912 23:59:54.164450 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.164472 kubelet[3348]: W0912 23:59:54.164467 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.164472 kubelet[3348]: E0912 23:59:54.164483 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.166899 kubelet[3348]: E0912 23:59:54.165758 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.166899 kubelet[3348]: W0912 23:59:54.166317 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.166899 kubelet[3348]: E0912 23:59:54.166377 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.167595 kubelet[3348]: E0912 23:59:54.167578 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.168296 kubelet[3348]: W0912 23:59:54.168042 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.168296 kubelet[3348]: E0912 23:59:54.168111 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.169712 kubelet[3348]: E0912 23:59:54.169232 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.169712 kubelet[3348]: W0912 23:59:54.169247 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.169712 kubelet[3348]: E0912 23:59:54.169293 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.171540 kubelet[3348]: E0912 23:59:54.170838 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.171540 kubelet[3348]: W0912 23:59:54.170852 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.171540 kubelet[3348]: E0912 23:59:54.170889 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.173576 kubelet[3348]: E0912 23:59:54.171993 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.173576 kubelet[3348]: W0912 23:59:54.172006 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.173576 kubelet[3348]: E0912 23:59:54.172041 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.173576 kubelet[3348]: E0912 23:59:54.172625 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.173576 kubelet[3348]: W0912 23:59:54.172656 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.173576 kubelet[3348]: E0912 23:59:54.173074 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.173576 kubelet[3348]: E0912 23:59:54.173821 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.173576 kubelet[3348]: W0912 23:59:54.173833 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.173576 kubelet[3348]: E0912 23:59:54.173877 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.176076 kubelet[3348]: E0912 23:59:54.174495 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.176076 kubelet[3348]: W0912 23:59:54.175030 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.176076 kubelet[3348]: E0912 23:59:54.175235 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.177621 kubelet[3348]: E0912 23:59:54.177559 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.181566 kubelet[3348]: W0912 23:59:54.178047 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.181566 kubelet[3348]: E0912 23:59:54.178122 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.181566 kubelet[3348]: E0912 23:59:54.178547 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.181566 kubelet[3348]: W0912 23:59:54.178560 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.181566 kubelet[3348]: E0912 23:59:54.178596 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.181566 kubelet[3348]: E0912 23:59:54.179965 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.181566 kubelet[3348]: W0912 23:59:54.179978 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.181566 kubelet[3348]: E0912 23:59:54.180008 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.181566 kubelet[3348]: E0912 23:59:54.180895 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.181566 kubelet[3348]: W0912 23:59:54.180910 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.182604 kubelet[3348]: E0912 23:59:54.180942 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.184607 kubelet[3348]: E0912 23:59:54.183232 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.184607 kubelet[3348]: W0912 23:59:54.183247 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.184607 kubelet[3348]: E0912 23:59:54.183306 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.184607 kubelet[3348]: E0912 23:59:54.183947 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.184607 kubelet[3348]: W0912 23:59:54.183960 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.184607 kubelet[3348]: E0912 23:59:54.183995 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.186674 kubelet[3348]: E0912 23:59:54.186133 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.186674 kubelet[3348]: W0912 23:59:54.186149 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.186674 kubelet[3348]: E0912 23:59:54.186192 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.189154 kubelet[3348]: E0912 23:59:54.188774 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.189154 kubelet[3348]: W0912 23:59:54.188792 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.189154 kubelet[3348]: E0912 23:59:54.188838 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.190235 kubelet[3348]: E0912 23:59:54.190206 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.190235 kubelet[3348]: W0912 23:59:54.190228 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.190562 kubelet[3348]: E0912 23:59:54.190275 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.190562 kubelet[3348]: E0912 23:59:54.190467 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.190562 kubelet[3348]: W0912 23:59:54.190476 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.191477 kubelet[3348]: E0912 23:59:54.190827 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.191477 kubelet[3348]: E0912 23:59:54.191018 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.191477 kubelet[3348]: W0912 23:59:54.191028 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.191477 kubelet[3348]: E0912 23:59:54.191113 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.191477 kubelet[3348]: E0912 23:59:54.191267 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.191477 kubelet[3348]: W0912 23:59:54.191275 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.191477 kubelet[3348]: E0912 23:59:54.191311 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.194049 kubelet[3348]: E0912 23:59:54.192279 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.194049 kubelet[3348]: W0912 23:59:54.192292 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.194049 kubelet[3348]: E0912 23:59:54.192309 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.194049 kubelet[3348]: E0912 23:59:54.192477 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.194049 kubelet[3348]: W0912 23:59:54.192484 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.194049 kubelet[3348]: E0912 23:59:54.192492 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:54.194049 kubelet[3348]: E0912 23:59:54.193045 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.194049 kubelet[3348]: W0912 23:59:54.193100 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.194049 kubelet[3348]: E0912 23:59:54.193114 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:54.212872 kubelet[3348]: E0912 23:59:54.212837 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:54.212872 kubelet[3348]: W0912 23:59:54.212859 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:54.212872 kubelet[3348]: E0912 23:59:54.212878 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:55.273819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345678323.mount: Deactivated successfully. 
Sep 12 23:59:55.731664 containerd[1830]: time="2025-09-12T23:59:55.731619383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:55.736226 kubelet[3348]: E0912 23:59:55.736180 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 12 23:59:55.737580 containerd[1830]: time="2025-09-12T23:59:55.737505015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 23:59:55.743757 containerd[1830]: time="2025-09-12T23:59:55.743705807Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:55.749047 containerd[1830]: time="2025-09-12T23:59:55.749002119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:55.749744 containerd[1830]: time="2025-09-12T23:59:55.749714318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.873701562s" Sep 12 23:59:55.749799 containerd[1830]: time="2025-09-12T23:59:55.749752198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 23:59:55.752117 containerd[1830]: time="2025-09-12T23:59:55.751614796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:59:55.771985 containerd[1830]: time="2025-09-12T23:59:55.771646528Z" level=info msg="CreateContainer within sandbox \"d2a5e55c7226291f6a0ab15cce9a9958de39dea13a3f7a1ecc5cbfba3ea1e44d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 23:59:55.798710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838023966.mount: Deactivated successfully. Sep 12 23:59:55.814570 containerd[1830]: time="2025-09-12T23:59:55.814519028Z" level=info msg="CreateContainer within sandbox \"d2a5e55c7226291f6a0ab15cce9a9958de39dea13a3f7a1ecc5cbfba3ea1e44d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f46c9bcc4f503ff60577350f52662bd3797b53a778a884c404959d3e7478c760\"" Sep 12 23:59:55.816477 containerd[1830]: time="2025-09-12T23:59:55.815354787Z" level=info msg="StartContainer for \"f46c9bcc4f503ff60577350f52662bd3797b53a778a884c404959d3e7478c760\"" Sep 12 23:59:55.884408 containerd[1830]: time="2025-09-12T23:59:55.884357732Z" level=info msg="StartContainer for \"f46c9bcc4f503ff60577350f52662bd3797b53a778a884c404959d3e7478c760\" returns successfully" Sep 12 23:59:56.868716 kubelet[3348]: E0912 23:59:56.868387 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:56.868716 kubelet[3348]: W0912 23:59:56.868469 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.868494 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.869510 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:56.870291 kubelet[3348]: W0912 23:59:56.869524 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.869558 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.869754 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:56.870291 kubelet[3348]: W0912 23:59:56.869763 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.869774 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.869966 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:56.870291 kubelet[3348]: W0912 23:59:56.869975 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:56.870291 kubelet[3348]: E0912 23:59:56.869985 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:59:56.871791 kubelet[3348]: E0912 23:59:56.870167 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:59:56.871791 kubelet[3348]: W0912 23:59:56.870193 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:59:56.871791 kubelet[3348]: E0912 23:59:56.870203 3348 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:59:57.068388 containerd[1830]: time="2025-09-12T23:59:57.067615771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:57.071408 containerd[1830]: time="2025-09-12T23:59:57.071364246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 23:59:57.077371 containerd[1830]: time="2025-09-12T23:59:57.076465399Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:57.083377 containerd[1830]: time="2025-09-12T23:59:57.083284989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:57.085543 containerd[1830]: time="2025-09-12T23:59:57.085391347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.333721271s" Sep 12 23:59:57.085543 containerd[1830]: time="2025-09-12T23:59:57.085441946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:59:57.089038 containerd[1830]: time="2025-09-12T23:59:57.088828182Z" level=info msg="CreateContainer within sandbox \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:59:57.131600 containerd[1830]: time="2025-09-12T23:59:57.131251163Z" level=info msg="CreateContainer within sandbox \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a4ddb1b21c97f8e82b39935680460e28996b1165cb2bb50bcffcff186e0d3e4\"" Sep 12 23:59:57.134535 containerd[1830]: time="2025-09-12T23:59:57.131987682Z" level=info msg="StartContainer for \"8a4ddb1b21c97f8e82b39935680460e28996b1165cb2bb50bcffcff186e0d3e4\"" Sep 12 23:59:57.209890 containerd[1830]: time="2025-09-12T23:59:57.209655854Z" level=info msg="StartContainer for \"8a4ddb1b21c97f8e82b39935680460e28996b1165cb2bb50bcffcff186e0d3e4\" returns successfully" Sep 12 23:59:57.262462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a4ddb1b21c97f8e82b39935680460e28996b1165cb2bb50bcffcff186e0d3e4-rootfs.mount: Deactivated successfully. Sep 12 23:59:57.736330 kubelet[3348]: E0912 23:59:57.736279 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 12 23:59:58.229902 kubelet[3348]: I0912 23:59:57.834744 3348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:59:58.229902 kubelet[3348]: I0912 23:59:57.857755 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66b77b9c6b-4s58t" podStartSLOduration=2.982274236 podStartE2EDuration="4.857726996s" podCreationTimestamp="2025-09-12 23:59:53 +0000 UTC" firstStartedPulling="2025-09-12 23:59:53.875414917 +0000 UTC m=+20.251906635" lastFinishedPulling="2025-09-12 23:59:55.750867677 +0000 UTC m=+22.127359395" observedRunningTime="2025-09-12 
23:59:56.856490464 +0000 UTC m=+23.232982182" watchObservedRunningTime="2025-09-12 23:59:57.857726996 +0000 UTC m=+24.234218714" Sep 12 23:59:58.295517 containerd[1830]: time="2025-09-12T23:59:58.295456469Z" level=info msg="shim disconnected" id=8a4ddb1b21c97f8e82b39935680460e28996b1165cb2bb50bcffcff186e0d3e4 namespace=k8s.io Sep 12 23:59:58.295517 containerd[1830]: time="2025-09-12T23:59:58.295521109Z" level=warning msg="cleaning up after shim disconnected" id=8a4ddb1b21c97f8e82b39935680460e28996b1165cb2bb50bcffcff186e0d3e4 namespace=k8s.io Sep 12 23:59:58.295903 containerd[1830]: time="2025-09-12T23:59:58.295531349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:59:58.841365 containerd[1830]: time="2025-09-12T23:59:58.840922073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:59:59.737215 kubelet[3348]: E0912 23:59:59.736313 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 13 00:00:01.735761 kubelet[3348]: E0913 00:00:01.735395 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 13 00:00:01.759596 containerd[1830]: time="2025-09-13T00:00:01.758687993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:01.763286 containerd[1830]: time="2025-09-13T00:00:01.763221907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" 
Sep 13 00:00:01.770965 containerd[1830]: time="2025-09-13T00:00:01.770887538Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:01.777158 containerd[1830]: time="2025-09-13T00:00:01.777068651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:01.778065 containerd[1830]: time="2025-09-13T00:00:01.777934610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.936968177s" Sep 13 00:00:01.778065 containerd[1830]: time="2025-09-13T00:00:01.777972170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:00:01.782627 containerd[1830]: time="2025-09-13T00:00:01.782567165Z" level=info msg="CreateContainer within sandbox \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:00:01.839437 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 13 00:00:01.853672 systemd[1]: logrotate.service: Deactivated successfully. 
Sep 13 00:00:01.864928 containerd[1830]: time="2025-09-13T00:00:01.864868027Z" level=info msg="CreateContainer within sandbox \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d4add16173fb723d9dfbcddafb9f2fe5bdeed5d2ee19d4d2f9ee2385ca0069dd\"" Sep 13 00:00:01.865857 containerd[1830]: time="2025-09-13T00:00:01.865815786Z" level=info msg="StartContainer for \"d4add16173fb723d9dfbcddafb9f2fe5bdeed5d2ee19d4d2f9ee2385ca0069dd\"" Sep 13 00:00:01.938914 containerd[1830]: time="2025-09-13T00:00:01.938743860Z" level=info msg="StartContainer for \"d4add16173fb723d9dfbcddafb9f2fe5bdeed5d2ee19d4d2f9ee2385ca0069dd\" returns successfully" Sep 13 00:00:03.181522 kubelet[3348]: I0913 00:00:03.181463 3348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:00:03.213052 containerd[1830]: time="2025-09-13T00:00:03.212997596Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:00:03.241114 kubelet[3348]: I0913 00:00:03.240139 3348 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:00:03.251545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4add16173fb723d9dfbcddafb9f2fe5bdeed5d2ee19d4d2f9ee2385ca0069dd-rootfs.mount: Deactivated successfully. 
Sep 13 00:00:03.347717 kubelet[3348]: I0913 00:00:03.347261 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38b924fa-67d4-4dd5-b92a-a61c832c28b3-tigera-ca-bundle\") pod \"calico-kube-controllers-7c5f5d79c6-pslxk\" (UID: \"38b924fa-67d4-4dd5-b92a-a61c832c28b3\") " pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" Sep 13 00:00:03.347717 kubelet[3348]: I0913 00:00:03.347692 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55r4\" (UniqueName: \"kubernetes.io/projected/bd16400c-5cb4-4ddc-a634-d229d8486603-kube-api-access-p55r4\") pod \"calico-apiserver-7d7646db59-7hfrz\" (UID: \"bd16400c-5cb4-4ddc-a634-d229d8486603\") " pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" Sep 13 00:00:03.347717 kubelet[3348]: I0913 00:00:03.347774 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wn28\" (UniqueName: \"kubernetes.io/projected/9fdfb2ff-1d8f-40fb-8d56-8897236ec308-kube-api-access-8wn28\") pod \"coredns-7c65d6cfc9-q8jln\" (UID: \"9fdfb2ff-1d8f-40fb-8d56-8897236ec308\") " pod="kube-system/coredns-7c65d6cfc9-q8jln" Sep 13 00:00:03.347717 kubelet[3348]: I0913 00:00:03.347798 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44lp\" (UniqueName: \"kubernetes.io/projected/3c611723-85b5-4e9c-9b6a-4091949cf62d-kube-api-access-j44lp\") pod \"coredns-7c65d6cfc9-88m8h\" (UID: \"3c611723-85b5-4e9c-9b6a-4091949cf62d\") " pod="kube-system/coredns-7c65d6cfc9-88m8h" Sep 13 00:00:03.347717 kubelet[3348]: I0913 00:00:03.347821 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eddd75bf-28ce-46f2-ab6e-7773bfe85938-calico-apiserver-certs\") pod 
\"calico-apiserver-7d7646db59-w2j5v\" (UID: \"eddd75bf-28ce-46f2-ab6e-7773bfe85938\") " pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" Sep 13 00:00:03.349235 kubelet[3348]: I0913 00:00:03.347842 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-ca-bundle\") pod \"whisker-5b5dd65974-vbvff\" (UID: \"19d99d46-ced1-4edc-abbd-62bd19d065ab\") " pod="calico-system/whisker-5b5dd65974-vbvff" Sep 13 00:00:03.349235 kubelet[3348]: I0913 00:00:03.347864 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/22768cde-472b-4c66-b597-59195c05c1b3-goldmane-key-pair\") pod \"goldmane-7988f88666-c6kj9\" (UID: \"22768cde-472b-4c66-b597-59195c05c1b3\") " pod="calico-system/goldmane-7988f88666-c6kj9" Sep 13 00:00:03.349235 kubelet[3348]: I0913 00:00:03.347880 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72qls\" (UniqueName: \"kubernetes.io/projected/38b924fa-67d4-4dd5-b92a-a61c832c28b3-kube-api-access-72qls\") pod \"calico-kube-controllers-7c5f5d79c6-pslxk\" (UID: \"38b924fa-67d4-4dd5-b92a-a61c832c28b3\") " pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" Sep 13 00:00:03.349235 kubelet[3348]: I0913 00:00:03.347897 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd16400c-5cb4-4ddc-a634-d229d8486603-calico-apiserver-certs\") pod \"calico-apiserver-7d7646db59-7hfrz\" (UID: \"bd16400c-5cb4-4ddc-a634-d229d8486603\") " pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" Sep 13 00:00:03.349235 kubelet[3348]: I0913 00:00:03.347914 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c611723-85b5-4e9c-9b6a-4091949cf62d-config-volume\") pod \"coredns-7c65d6cfc9-88m8h\" (UID: \"3c611723-85b5-4e9c-9b6a-4091949cf62d\") " pod="kube-system/coredns-7c65d6cfc9-88m8h" Sep 13 00:00:03.349352 kubelet[3348]: I0913 00:00:03.347947 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sswc\" (UniqueName: \"kubernetes.io/projected/22768cde-472b-4c66-b597-59195c05c1b3-kube-api-access-6sswc\") pod \"goldmane-7988f88666-c6kj9\" (UID: \"22768cde-472b-4c66-b597-59195c05c1b3\") " pod="calico-system/goldmane-7988f88666-c6kj9" Sep 13 00:00:03.349352 kubelet[3348]: I0913 00:00:03.347968 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-backend-key-pair\") pod \"whisker-5b5dd65974-vbvff\" (UID: \"19d99d46-ced1-4edc-abbd-62bd19d065ab\") " pod="calico-system/whisker-5b5dd65974-vbvff" Sep 13 00:00:03.349352 kubelet[3348]: I0913 00:00:03.347986 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2gh\" (UniqueName: \"kubernetes.io/projected/19d99d46-ced1-4edc-abbd-62bd19d065ab-kube-api-access-hh2gh\") pod \"whisker-5b5dd65974-vbvff\" (UID: \"19d99d46-ced1-4edc-abbd-62bd19d065ab\") " pod="calico-system/whisker-5b5dd65974-vbvff" Sep 13 00:00:03.349352 kubelet[3348]: I0913 00:00:03.348004 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22768cde-472b-4c66-b597-59195c05c1b3-goldmane-ca-bundle\") pod \"goldmane-7988f88666-c6kj9\" (UID: \"22768cde-472b-4c66-b597-59195c05c1b3\") " pod="calico-system/goldmane-7988f88666-c6kj9" Sep 13 00:00:03.349352 kubelet[3348]: I0913 00:00:03.348024 3348 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fdfb2ff-1d8f-40fb-8d56-8897236ec308-config-volume\") pod \"coredns-7c65d6cfc9-q8jln\" (UID: \"9fdfb2ff-1d8f-40fb-8d56-8897236ec308\") " pod="kube-system/coredns-7c65d6cfc9-q8jln" Sep 13 00:00:03.349484 kubelet[3348]: I0913 00:00:03.348040 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7m7\" (UniqueName: \"kubernetes.io/projected/eddd75bf-28ce-46f2-ab6e-7773bfe85938-kube-api-access-lt7m7\") pod \"calico-apiserver-7d7646db59-w2j5v\" (UID: \"eddd75bf-28ce-46f2-ab6e-7773bfe85938\") " pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" Sep 13 00:00:03.349484 kubelet[3348]: I0913 00:00:03.348056 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22768cde-472b-4c66-b597-59195c05c1b3-config\") pod \"goldmane-7988f88666-c6kj9\" (UID: \"22768cde-472b-4c66-b597-59195c05c1b3\") " pod="calico-system/goldmane-7988f88666-c6kj9" Sep 13 00:00:04.041400 containerd[1830]: time="2025-09-13T00:00:04.040545939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4wv45,Uid:7fc7d1ef-2f94-42c6-a027-0084519d796b,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:04.113470 containerd[1830]: time="2025-09-13T00:00:04.113399693Z" level=info msg="shim disconnected" id=d4add16173fb723d9dfbcddafb9f2fe5bdeed5d2ee19d4d2f9ee2385ca0069dd namespace=k8s.io Sep 13 00:00:04.113470 containerd[1830]: time="2025-09-13T00:00:04.113460853Z" level=warning msg="cleaning up after shim disconnected" id=d4add16173fb723d9dfbcddafb9f2fe5bdeed5d2ee19d4d2f9ee2385ca0069dd namespace=k8s.io Sep 13 00:00:04.113470 containerd[1830]: time="2025-09-13T00:00:04.113470653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:00:04.132890 containerd[1830]: time="2025-09-13T00:00:04.132679911Z" 
level=warning msg="cleanup warnings time=\"2025-09-13T00:00:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:00:04.184363 containerd[1830]: time="2025-09-13T00:00:04.184310890Z" level=error msg="Failed to destroy network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.184666 containerd[1830]: time="2025-09-13T00:00:04.184631529Z" level=error msg="encountered an error cleaning up failed sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.184722 containerd[1830]: time="2025-09-13T00:00:04.184696329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4wv45,Uid:7fc7d1ef-2f94-42c6-a027-0084519d796b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.184940 kubelet[3348]: E0913 00:00:04.184895 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.185281 kubelet[3348]: E0913 00:00:04.184967 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4wv45" Sep 13 00:00:04.185281 kubelet[3348]: E0913 00:00:04.184987 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4wv45" Sep 13 00:00:04.185281 kubelet[3348]: E0913 00:00:04.185048 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4wv45_calico-system(7fc7d1ef-2f94-42c6-a027-0084519d796b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4wv45_calico-system(7fc7d1ef-2f94-42c6-a027-0084519d796b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 13 00:00:04.194503 containerd[1830]: time="2025-09-13T00:00:04.194463278Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-88m8h,Uid:3c611723-85b5-4e9c-9b6a-4091949cf62d,Namespace:kube-system,Attempt:0,}" Sep 13 00:00:04.206034 containerd[1830]: time="2025-09-13T00:00:04.205707704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c5f5d79c6-pslxk,Uid:38b924fa-67d4-4dd5-b92a-a61c832c28b3,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:04.210356 containerd[1830]: time="2025-09-13T00:00:04.210161179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8jln,Uid:9fdfb2ff-1d8f-40fb-8d56-8897236ec308,Namespace:kube-system,Attempt:0,}" Sep 13 00:00:04.211588 containerd[1830]: time="2025-09-13T00:00:04.211558937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-7hfrz,Uid:bd16400c-5cb4-4ddc-a634-d229d8486603,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:00:04.216463 containerd[1830]: time="2025-09-13T00:00:04.216427892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b5dd65974-vbvff,Uid:19d99d46-ced1-4edc-abbd-62bd19d065ab,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:04.221455 containerd[1830]: time="2025-09-13T00:00:04.221016926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-c6kj9,Uid:22768cde-472b-4c66-b597-59195c05c1b3,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:04.221455 containerd[1830]: time="2025-09-13T00:00:04.221266926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-w2j5v,Uid:eddd75bf-28ce-46f2-ab6e-7773bfe85938,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:00:04.247125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae-shm.mount: Deactivated successfully. 
Sep 13 00:00:04.423771 containerd[1830]: time="2025-09-13T00:00:04.423704487Z" level=error msg="Failed to destroy network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.424153 containerd[1830]: time="2025-09-13T00:00:04.424120926Z" level=error msg="encountered an error cleaning up failed sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.424225 containerd[1830]: time="2025-09-13T00:00:04.424199766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88m8h,Uid:3c611723-85b5-4e9c-9b6a-4091949cf62d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.424552 kubelet[3348]: E0913 00:00:04.424504 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.424633 kubelet[3348]: E0913 00:00:04.424575 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-88m8h" Sep 13 00:00:04.424633 kubelet[3348]: E0913 00:00:04.424597 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-88m8h" Sep 13 00:00:04.424711 kubelet[3348]: E0913 00:00:04.424639 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-88m8h_kube-system(3c611723-85b5-4e9c-9b6a-4091949cf62d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-88m8h_kube-system(3c611723-85b5-4e9c-9b6a-4091949cf62d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-88m8h" podUID="3c611723-85b5-4e9c-9b6a-4091949cf62d" Sep 13 00:00:04.523864 containerd[1830]: time="2025-09-13T00:00:04.523779049Z" level=error msg="Failed to destroy network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 
00:00:04.525456 containerd[1830]: time="2025-09-13T00:00:04.524958767Z" level=error msg="encountered an error cleaning up failed sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.525661 containerd[1830]: time="2025-09-13T00:00:04.525634087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c5f5d79c6-pslxk,Uid:38b924fa-67d4-4dd5-b92a-a61c832c28b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.526415 kubelet[3348]: E0913 00:00:04.526356 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.526504 kubelet[3348]: E0913 00:00:04.526426 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" Sep 13 00:00:04.526504 kubelet[3348]: E0913 
00:00:04.526445 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" Sep 13 00:00:04.526504 kubelet[3348]: E0913 00:00:04.526485 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c5f5d79c6-pslxk_calico-system(38b924fa-67d4-4dd5-b92a-a61c832c28b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c5f5d79c6-pslxk_calico-system(38b924fa-67d4-4dd5-b92a-a61c832c28b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" podUID="38b924fa-67d4-4dd5-b92a-a61c832c28b3" Sep 13 00:00:04.559830 containerd[1830]: time="2025-09-13T00:00:04.559335247Z" level=error msg="Failed to destroy network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.559830 containerd[1830]: time="2025-09-13T00:00:04.559672966Z" level=error msg="encountered an error cleaning up failed sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.559830 containerd[1830]: time="2025-09-13T00:00:04.559726446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b5dd65974-vbvff,Uid:19d99d46-ced1-4edc-abbd-62bd19d065ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.560053 kubelet[3348]: E0913 00:00:04.559942 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.560053 kubelet[3348]: E0913 00:00:04.559997 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b5dd65974-vbvff" Sep 13 00:00:04.560053 kubelet[3348]: E0913 00:00:04.560022 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b5dd65974-vbvff" Sep 13 00:00:04.560186 kubelet[3348]: E0913 00:00:04.560059 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b5dd65974-vbvff_calico-system(19d99d46-ced1-4edc-abbd-62bd19d065ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b5dd65974-vbvff_calico-system(19d99d46-ced1-4edc-abbd-62bd19d065ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b5dd65974-vbvff" podUID="19d99d46-ced1-4edc-abbd-62bd19d065ab" Sep 13 00:00:04.600579 containerd[1830]: time="2025-09-13T00:00:04.600522638Z" level=error msg="Failed to destroy network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.601837 containerd[1830]: time="2025-09-13T00:00:04.601782317Z" level=error msg="encountered an error cleaning up failed sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.601955 containerd[1830]: time="2025-09-13T00:00:04.601857077Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7d7646db59-w2j5v,Uid:eddd75bf-28ce-46f2-ab6e-7773bfe85938,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.602254 kubelet[3348]: E0913 00:00:04.602134 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.602254 kubelet[3348]: E0913 00:00:04.602207 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" Sep 13 00:00:04.602254 kubelet[3348]: E0913 00:00:04.602226 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" Sep 13 00:00:04.602382 kubelet[3348]: E0913 00:00:04.602287 3348 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d7646db59-w2j5v_calico-apiserver(eddd75bf-28ce-46f2-ab6e-7773bfe85938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d7646db59-w2j5v_calico-apiserver(eddd75bf-28ce-46f2-ab6e-7773bfe85938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" podUID="eddd75bf-28ce-46f2-ab6e-7773bfe85938" Sep 13 00:00:04.603798 containerd[1830]: time="2025-09-13T00:00:04.603755834Z" level=error msg="Failed to destroy network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.604420 containerd[1830]: time="2025-09-13T00:00:04.604301514Z" level=error msg="encountered an error cleaning up failed sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.604420 containerd[1830]: time="2025-09-13T00:00:04.604364154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8jln,Uid:9fdfb2ff-1d8f-40fb-8d56-8897236ec308,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.605443 kubelet[3348]: E0913 00:00:04.605321 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.605443 kubelet[3348]: E0913 00:00:04.605389 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q8jln" Sep 13 00:00:04.605443 kubelet[3348]: E0913 00:00:04.605416 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-q8jln" Sep 13 00:00:04.605682 kubelet[3348]: E0913 00:00:04.605621 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-q8jln_kube-system(9fdfb2ff-1d8f-40fb-8d56-8897236ec308)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-q8jln_kube-system(9fdfb2ff-1d8f-40fb-8d56-8897236ec308)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q8jln" podUID="9fdfb2ff-1d8f-40fb-8d56-8897236ec308" Sep 13 00:00:04.621124 containerd[1830]: time="2025-09-13T00:00:04.620997214Z" level=error msg="Failed to destroy network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.621287 containerd[1830]: time="2025-09-13T00:00:04.621191494Z" level=error msg="Failed to destroy network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.621695 containerd[1830]: time="2025-09-13T00:00:04.621657453Z" level=error msg="encountered an error cleaning up failed sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.621900 containerd[1830]: time="2025-09-13T00:00:04.621750093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-c6kj9,Uid:22768cde-472b-4c66-b597-59195c05c1b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.621900 containerd[1830]: time="2025-09-13T00:00:04.621819773Z" level=error msg="encountered an error cleaning up failed sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.621900 containerd[1830]: time="2025-09-13T00:00:04.621860533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-7hfrz,Uid:bd16400c-5cb4-4ddc-a634-d229d8486603,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.622053 kubelet[3348]: E0913 00:00:04.621969 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.622111 kubelet[3348]: E0913 00:00:04.622067 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-c6kj9" Sep 13 00:00:04.622154 kubelet[3348]: E0913 00:00:04.622131 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-c6kj9" Sep 13 00:00:04.622240 kubelet[3348]: E0913 00:00:04.622202 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-c6kj9_calico-system(22768cde-472b-4c66-b597-59195c05c1b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-c6kj9_calico-system(22768cde-472b-4c66-b597-59195c05c1b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-c6kj9" podUID="22768cde-472b-4c66-b597-59195c05c1b3" Sep 13 00:00:04.623310 kubelet[3348]: E0913 00:00:04.623274 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.623372 kubelet[3348]: E0913 00:00:04.623330 3348 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" Sep 13 00:00:04.623372 kubelet[3348]: E0913 00:00:04.623348 3348 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" Sep 13 00:00:04.623419 kubelet[3348]: E0913 00:00:04.623386 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d7646db59-7hfrz_calico-apiserver(bd16400c-5cb4-4ddc-a634-d229d8486603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d7646db59-7hfrz_calico-apiserver(bd16400c-5cb4-4ddc-a634-d229d8486603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" podUID="bd16400c-5cb4-4ddc-a634-d229d8486603" Sep 13 00:00:04.854188 kubelet[3348]: I0913 00:00:04.853201 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:04.854353 containerd[1830]: time="2025-09-13T00:00:04.854199419Z" level=info msg="StopPodSandbox for 
\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" Sep 13 00:00:04.854392 containerd[1830]: time="2025-09-13T00:00:04.854371219Z" level=info msg="Ensure that sandbox 72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae in task-service has been cleanup successfully" Sep 13 00:00:04.858154 kubelet[3348]: I0913 00:00:04.858111 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:04.859031 containerd[1830]: time="2025-09-13T00:00:04.858986213Z" level=info msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" Sep 13 00:00:04.859231 containerd[1830]: time="2025-09-13T00:00:04.859204373Z" level=info msg="Ensure that sandbox 23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8 in task-service has been cleanup successfully" Sep 13 00:00:04.867714 kubelet[3348]: I0913 00:00:04.867660 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:04.872540 containerd[1830]: time="2025-09-13T00:00:04.872323277Z" level=info msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" Sep 13 00:00:04.873528 containerd[1830]: time="2025-09-13T00:00:04.872673597Z" level=info msg="Ensure that sandbox 9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb in task-service has been cleanup successfully" Sep 13 00:00:04.880611 kubelet[3348]: I0913 00:00:04.880557 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:04.884821 containerd[1830]: time="2025-09-13T00:00:04.884763943Z" level=info msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" Sep 13 00:00:04.885047 containerd[1830]: 
time="2025-09-13T00:00:04.885015422Z" level=info msg="Ensure that sandbox f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344 in task-service has been cleanup successfully" Sep 13 00:00:04.887592 kubelet[3348]: I0913 00:00:04.887539 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:00:04.890608 containerd[1830]: time="2025-09-13T00:00:04.890559056Z" level=info msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" Sep 13 00:00:04.890943 containerd[1830]: time="2025-09-13T00:00:04.890913055Z" level=info msg="Ensure that sandbox e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52 in task-service has been cleanup successfully" Sep 13 00:00:04.911118 kubelet[3348]: I0913 00:00:04.908487 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:04.911234 containerd[1830]: time="2025-09-13T00:00:04.910414272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:00:04.914608 containerd[1830]: time="2025-09-13T00:00:04.914399028Z" level=info msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" Sep 13 00:00:04.916237 containerd[1830]: time="2025-09-13T00:00:04.916149226Z" level=info msg="Ensure that sandbox b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634 in task-service has been cleanup successfully" Sep 13 00:00:04.919267 kubelet[3348]: I0913 00:00:04.919224 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:04.921136 containerd[1830]: time="2025-09-13T00:00:04.919991501Z" level=info msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" Sep 13 
00:00:04.929487 containerd[1830]: time="2025-09-13T00:00:04.925399215Z" level=info msg="Ensure that sandbox c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8 in task-service has been cleanup successfully" Sep 13 00:00:04.942177 kubelet[3348]: I0913 00:00:04.941667 3348 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:04.942513 containerd[1830]: time="2025-09-13T00:00:04.942479955Z" level=info msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" Sep 13 00:00:04.942759 containerd[1830]: time="2025-09-13T00:00:04.942738714Z" level=info msg="Ensure that sandbox e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24 in task-service has been cleanup successfully" Sep 13 00:00:04.972992 containerd[1830]: time="2025-09-13T00:00:04.972932919Z" level=error msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" failed" error="failed to destroy network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.976268 kubelet[3348]: E0913 00:00:04.975887 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:04.976268 kubelet[3348]: E0913 00:00:04.975950 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8"} Sep 13 00:00:04.976268 kubelet[3348]: E0913 00:00:04.976005 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c611723-85b5-4e9c-9b6a-4091949cf62d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:04.976268 kubelet[3348]: E0913 00:00:04.976026 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c611723-85b5-4e9c-9b6a-4091949cf62d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-88m8h" podUID="3c611723-85b5-4e9c-9b6a-4091949cf62d" Sep 13 00:00:04.984460 containerd[1830]: time="2025-09-13T00:00:04.984371265Z" level=error msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" failed" error="failed to destroy network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.984710 kubelet[3348]: E0913 00:00:04.984662 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:04.984764 kubelet[3348]: E0913 00:00:04.984715 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae"} Sep 13 00:00:04.984764 kubelet[3348]: E0913 00:00:04.984750 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fc7d1ef-2f94-42c6-a027-0084519d796b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:04.984846 kubelet[3348]: E0913 00:00:04.984772 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fc7d1ef-2f94-42c6-a027-0084519d796b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 13 00:00:04.998097 containerd[1830]: time="2025-09-13T00:00:04.998035529Z" level=error msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" failed" error="failed to destroy network for sandbox 
\"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:04.999111 kubelet[3348]: E0913 00:00:04.998445 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:04.999111 kubelet[3348]: E0913 00:00:04.998504 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb"} Sep 13 00:00:04.999111 kubelet[3348]: E0913 00:00:04.998537 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:04.999111 kubelet[3348]: E0913 00:00:04.998561 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b5dd65974-vbvff" podUID="19d99d46-ced1-4edc-abbd-62bd19d065ab" Sep 13 00:00:05.004710 containerd[1830]: time="2025-09-13T00:00:05.004653401Z" level=error msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" failed" error="failed to destroy network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:05.005168 kubelet[3348]: E0913 00:00:05.004981 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:05.005168 kubelet[3348]: E0913 00:00:05.005038 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344"} Sep 13 00:00:05.005168 kubelet[3348]: E0913 00:00:05.005075 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22768cde-472b-4c66-b597-59195c05c1b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Sep 13 00:00:05.005168 kubelet[3348]: E0913 00:00:05.005109 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22768cde-472b-4c66-b597-59195c05c1b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-c6kj9" podUID="22768cde-472b-4c66-b597-59195c05c1b3" Sep 13 00:00:05.020565 containerd[1830]: time="2025-09-13T00:00:05.020507982Z" level=error msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" failed" error="failed to destroy network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:05.021005 kubelet[3348]: E0913 00:00:05.020863 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:05.021005 kubelet[3348]: E0913 00:00:05.020920 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634"} Sep 13 00:00:05.021005 kubelet[3348]: E0913 00:00:05.020954 3348 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38b924fa-67d4-4dd5-b92a-a61c832c28b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:05.021005 kubelet[3348]: E0913 00:00:05.020975 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38b924fa-67d4-4dd5-b92a-a61c832c28b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" podUID="38b924fa-67d4-4dd5-b92a-a61c832c28b3" Sep 13 00:00:05.031003 containerd[1830]: time="2025-09-13T00:00:05.030756170Z" level=error msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" failed" error="failed to destroy network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:05.032343 kubelet[3348]: E0913 00:00:05.031154 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:00:05.032443 kubelet[3348]: E0913 00:00:05.032376 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52"} Sep 13 00:00:05.032469 kubelet[3348]: E0913 00:00:05.032424 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd16400c-5cb4-4ddc-a634-d229d8486603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:05.032534 kubelet[3348]: E0913 00:00:05.032463 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd16400c-5cb4-4ddc-a634-d229d8486603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" podUID="bd16400c-5cb4-4ddc-a634-d229d8486603" Sep 13 00:00:05.043328 containerd[1830]: time="2025-09-13T00:00:05.043185076Z" level=error msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" failed" error="failed to destroy network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:05.043586 kubelet[3348]: E0913 00:00:05.043469 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:05.043586 kubelet[3348]: E0913 00:00:05.043535 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8"} Sep 13 00:00:05.043586 kubelet[3348]: E0913 00:00:05.043580 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eddd75bf-28ce-46f2-ab6e-7773bfe85938\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:05.043730 kubelet[3348]: E0913 00:00:05.043607 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eddd75bf-28ce-46f2-ab6e-7773bfe85938\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" podUID="eddd75bf-28ce-46f2-ab6e-7773bfe85938" Sep 13 00:00:05.045697 containerd[1830]: time="2025-09-13T00:00:05.045611913Z" level=error msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" failed" error="failed to destroy network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:05.046041 kubelet[3348]: E0913 00:00:05.045990 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:05.046179 kubelet[3348]: E0913 00:00:05.046052 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24"} Sep 13 00:00:05.046179 kubelet[3348]: E0913 00:00:05.046085 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fdfb2ff-1d8f-40fb-8d56-8897236ec308\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:05.046278 kubelet[3348]: E0913 00:00:05.046182 3348 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fdfb2ff-1d8f-40fb-8d56-8897236ec308\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q8jln" podUID="9fdfb2ff-1d8f-40fb-8d56-8897236ec308" Sep 13 00:00:05.248907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634-shm.mount: Deactivated successfully. Sep 13 00:00:05.249080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8-shm.mount: Deactivated successfully. Sep 13 00:00:15.738068 containerd[1830]: time="2025-09-13T00:00:15.736593139Z" level=info msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" Sep 13 00:00:15.764470 containerd[1830]: time="2025-09-13T00:00:15.764407543Z" level=error msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" failed" error="failed to destroy network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:15.764942 kubelet[3348]: E0913 00:00:15.764874 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:15.765393 kubelet[3348]: E0913 00:00:15.764951 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24"} Sep 13 00:00:15.765393 kubelet[3348]: E0913 00:00:15.764992 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fdfb2ff-1d8f-40fb-8d56-8897236ec308\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:15.765393 kubelet[3348]: E0913 00:00:15.765015 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fdfb2ff-1d8f-40fb-8d56-8897236ec308\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-q8jln" podUID="9fdfb2ff-1d8f-40fb-8d56-8897236ec308" Sep 13 00:00:16.736248 containerd[1830]: time="2025-09-13T00:00:16.736196180Z" level=info msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" Sep 13 00:00:16.737681 containerd[1830]: time="2025-09-13T00:00:16.736773739Z" level=info msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" Sep 13 00:00:16.737681 containerd[1830]: 
time="2025-09-13T00:00:16.737071419Z" level=info msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" Sep 13 00:00:16.739831 containerd[1830]: time="2025-09-13T00:00:16.739779935Z" level=info msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" Sep 13 00:00:16.823650 containerd[1830]: time="2025-09-13T00:00:16.823427185Z" level=error msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" failed" error="failed to destroy network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:16.824014 kubelet[3348]: E0913 00:00:16.823724 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:16.824014 kubelet[3348]: E0913 00:00:16.823789 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb"} Sep 13 00:00:16.824014 kubelet[3348]: E0913 00:00:16.823829 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:16.824014 kubelet[3348]: E0913 00:00:16.823854 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b5dd65974-vbvff" podUID="19d99d46-ced1-4edc-abbd-62bd19d065ab" Sep 13 00:00:16.825201 containerd[1830]: time="2025-09-13T00:00:16.824851063Z" level=error msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" failed" error="failed to destroy network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:16.825350 kubelet[3348]: E0913 00:00:16.825256 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:16.825350 kubelet[3348]: E0913 00:00:16.825335 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8"} Sep 13 00:00:16.825421 kubelet[3348]: E0913 00:00:16.825376 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eddd75bf-28ce-46f2-ab6e-7773bfe85938\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:16.825481 kubelet[3348]: E0913 00:00:16.825426 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eddd75bf-28ce-46f2-ab6e-7773bfe85938\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" podUID="eddd75bf-28ce-46f2-ab6e-7773bfe85938" Sep 13 00:00:16.825830 containerd[1830]: time="2025-09-13T00:00:16.825030263Z" level=error msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" failed" error="failed to destroy network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:16.825980 kubelet[3348]: E0913 00:00:16.825913 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network 
for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:16.826041 kubelet[3348]: E0913 00:00:16.825988 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634"} Sep 13 00:00:16.826041 kubelet[3348]: E0913 00:00:16.826031 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38b924fa-67d4-4dd5-b92a-a61c832c28b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:16.826176 kubelet[3348]: E0913 00:00:16.826054 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38b924fa-67d4-4dd5-b92a-a61c832c28b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" podUID="38b924fa-67d4-4dd5-b92a-a61c832c28b3" Sep 13 00:00:16.827182 containerd[1830]: time="2025-09-13T00:00:16.827074100Z" level=error msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" failed" error="failed 
to destroy network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:16.827517 kubelet[3348]: E0913 00:00:16.827467 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:16.827595 kubelet[3348]: E0913 00:00:16.827525 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae"} Sep 13 00:00:16.827595 kubelet[3348]: E0913 00:00:16.827562 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7fc7d1ef-2f94-42c6-a027-0084519d796b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:16.827671 kubelet[3348]: E0913 00:00:16.827590 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7fc7d1ef-2f94-42c6-a027-0084519d796b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4wv45" podUID="7fc7d1ef-2f94-42c6-a027-0084519d796b" Sep 13 00:00:18.736523 containerd[1830]: time="2025-09-13T00:00:18.736146020Z" level=info msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" Sep 13 00:00:20.523562 containerd[1830]: time="2025-09-13T00:00:19.737644218Z" level=info msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" Sep 13 00:00:20.523562 containerd[1830]: time="2025-09-13T00:00:19.737657098Z" level=info msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" Sep 13 00:00:20.528413 containerd[1830]: time="2025-09-13T00:00:20.527519175Z" level=error msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" failed" error="failed to destroy network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:20.528528 kubelet[3348]: E0913 00:00:20.528259 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:20.528528 kubelet[3348]: E0913 00:00:20.528318 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8"} Sep 13 00:00:20.528528 kubelet[3348]: E0913 00:00:20.528354 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c611723-85b5-4e9c-9b6a-4091949cf62d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:20.528528 kubelet[3348]: E0913 00:00:20.528376 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c611723-85b5-4e9c-9b6a-4091949cf62d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-88m8h" podUID="3c611723-85b5-4e9c-9b6a-4091949cf62d" Sep 13 00:00:20.529630 kubelet[3348]: E0913 00:00:20.528877 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:00:20.529630 kubelet[3348]: E0913 00:00:20.528912 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52"} Sep 13 00:00:20.529630 kubelet[3348]: E0913 00:00:20.528937 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd16400c-5cb4-4ddc-a634-d229d8486603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:20.529630 kubelet[3348]: E0913 00:00:20.528955 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd16400c-5cb4-4ddc-a634-d229d8486603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" podUID="bd16400c-5cb4-4ddc-a634-d229d8486603" Sep 13 00:00:20.529758 containerd[1830]: time="2025-09-13T00:00:20.528715733Z" level=error msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" failed" error="failed to destroy network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:20.530254 containerd[1830]: time="2025-09-13T00:00:20.530216251Z" level=error msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" 
failed" error="failed to destroy network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:20.530751 kubelet[3348]: E0913 00:00:20.530381 3348 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:20.530751 kubelet[3348]: E0913 00:00:20.530416 3348 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344"} Sep 13 00:00:20.530751 kubelet[3348]: E0913 00:00:20.530443 3348 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22768cde-472b-4c66-b597-59195c05c1b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:20.530751 kubelet[3348]: E0913 00:00:20.530460 3348 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22768cde-472b-4c66-b597-59195c05c1b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-c6kj9" podUID="22768cde-472b-4c66-b597-59195c05c1b3" Sep 13 00:00:22.685000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628023933.mount: Deactivated successfully. Sep 13 00:00:22.740199 containerd[1830]: time="2025-09-13T00:00:22.740136094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:22.743329 containerd[1830]: time="2025-09-13T00:00:22.743136730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:00:22.747319 containerd[1830]: time="2025-09-13T00:00:22.747257725Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:22.752216 containerd[1830]: time="2025-09-13T00:00:22.752165478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:22.753125 containerd[1830]: time="2025-09-13T00:00:22.752903197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 17.842440845s" Sep 13 00:00:22.753125 containerd[1830]: time="2025-09-13T00:00:22.752941517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference 
\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:00:22.769674 containerd[1830]: time="2025-09-13T00:00:22.769377136Z" level=info msg="CreateContainer within sandbox \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:00:22.821409 containerd[1830]: time="2025-09-13T00:00:22.821355947Z" level=info msg="CreateContainer within sandbox \"d538c4bd01ad4a0bf0b8ed615dde50b247b2a489f6896169a9a42ef568007616\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e49c18cfe43f09291268bbb9fec86f848729d406869b09099470d79b8e686e82\"" Sep 13 00:00:22.822313 containerd[1830]: time="2025-09-13T00:00:22.822291666Z" level=info msg="StartContainer for \"e49c18cfe43f09291268bbb9fec86f848729d406869b09099470d79b8e686e82\"" Sep 13 00:00:22.884123 containerd[1830]: time="2025-09-13T00:00:22.883532385Z" level=info msg="StartContainer for \"e49c18cfe43f09291268bbb9fec86f848729d406869b09099470d79b8e686e82\" returns successfully" Sep 13 00:00:23.009854 kubelet[3348]: I0913 00:00:23.009539 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kzhr4" podStartSLOduration=1.412523829 podStartE2EDuration="30.009519739s" podCreationTimestamp="2025-09-12 23:59:53 +0000 UTC" firstStartedPulling="2025-09-12 23:59:54.156851446 +0000 UTC m=+20.533343164" lastFinishedPulling="2025-09-13 00:00:22.753847356 +0000 UTC m=+49.130339074" observedRunningTime="2025-09-13 00:00:23.006538182 +0000 UTC m=+49.383029900" watchObservedRunningTime="2025-09-13 00:00:23.009519739 +0000 UTC m=+49.386011457" Sep 13 00:00:23.398358 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:00:23.398501 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:00:23.564354 containerd[1830]: time="2025-09-13T00:00:23.562918143Z" level=info msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.692 [INFO][4699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.692 [INFO][4699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" iface="eth0" netns="/var/run/netns/cni-53d46899-f329-6f5b-23ce-fc531ada50a3" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.693 [INFO][4699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" iface="eth0" netns="/var/run/netns/cni-53d46899-f329-6f5b-23ce-fc531ada50a3" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.695 [INFO][4699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" iface="eth0" netns="/var/run/netns/cni-53d46899-f329-6f5b-23ce-fc531ada50a3" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.695 [INFO][4699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.695 [INFO][4699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.732 [INFO][4707] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.732 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.732 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.749 [WARNING][4707] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.749 [INFO][4707] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.752 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:23.764219 containerd[1830]: 2025-09-13 00:00:23.759 [INFO][4699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:23.767511 containerd[1830]: time="2025-09-13T00:00:23.767467015Z" level=info msg="TearDown network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" successfully" Sep 13 00:00:23.767511 containerd[1830]: time="2025-09-13T00:00:23.767502175Z" level=info msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" returns successfully" Sep 13 00:00:23.774279 systemd[1]: run-netns-cni\x2d53d46899\x2df329\x2d6f5b\x2d23ce\x2dfc531ada50a3.mount: Deactivated successfully. 
Sep 13 00:00:23.902933 kubelet[3348]: I0913 00:00:23.902882 3348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-backend-key-pair\") pod \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" (UID: \"19d99d46-ced1-4edc-abbd-62bd19d065ab\") " Sep 13 00:00:23.902933 kubelet[3348]: I0913 00:00:23.902943 3348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-ca-bundle\") pod \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" (UID: \"19d99d46-ced1-4edc-abbd-62bd19d065ab\") " Sep 13 00:00:23.903139 kubelet[3348]: I0913 00:00:23.902965 3348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh2gh\" (UniqueName: \"kubernetes.io/projected/19d99d46-ced1-4edc-abbd-62bd19d065ab-kube-api-access-hh2gh\") pod \"19d99d46-ced1-4edc-abbd-62bd19d065ab\" (UID: \"19d99d46-ced1-4edc-abbd-62bd19d065ab\") " Sep 13 00:00:23.904832 kubelet[3348]: I0913 00:00:23.904658 3348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "19d99d46-ced1-4edc-abbd-62bd19d065ab" (UID: "19d99d46-ced1-4edc-abbd-62bd19d065ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:00:23.910465 systemd[1]: var-lib-kubelet-pods-19d99d46\x2dced1\x2d4edc\x2dabbd\x2d62bd19d065ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhh2gh.mount: Deactivated successfully. 
Sep 13 00:00:23.910905 kubelet[3348]: I0913 00:00:23.910759 3348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d99d46-ced1-4edc-abbd-62bd19d065ab-kube-api-access-hh2gh" (OuterVolumeSpecName: "kube-api-access-hh2gh") pod "19d99d46-ced1-4edc-abbd-62bd19d065ab" (UID: "19d99d46-ced1-4edc-abbd-62bd19d065ab"). InnerVolumeSpecName "kube-api-access-hh2gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:00:23.910861 systemd[1]: var-lib-kubelet-pods-19d99d46\x2dced1\x2d4edc\x2dabbd\x2d62bd19d065ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:00:23.912211 kubelet[3348]: I0913 00:00:23.911539 3348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "19d99d46-ced1-4edc-abbd-62bd19d065ab" (UID: "19d99d46-ced1-4edc-abbd-62bd19d065ab"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:00:24.004374 kubelet[3348]: I0913 00:00:24.004033 3348 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-d2844e2d10\" DevicePath \"\"" Sep 13 00:00:24.004374 kubelet[3348]: I0913 00:00:24.004066 3348 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d99d46-ced1-4edc-abbd-62bd19d065ab-whisker-ca-bundle\") on node \"ci-4081.3.5-n-d2844e2d10\" DevicePath \"\"" Sep 13 00:00:24.004374 kubelet[3348]: I0913 00:00:24.004078 3348 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh2gh\" (UniqueName: \"kubernetes.io/projected/19d99d46-ced1-4edc-abbd-62bd19d065ab-kube-api-access-hh2gh\") on node \"ci-4081.3.5-n-d2844e2d10\" DevicePath \"\"" Sep 13 00:00:24.205055 kubelet[3348]: I0913 00:00:24.204953 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/340788de-455a-46ad-8015-5efb6f0289f2-whisker-backend-key-pair\") pod \"whisker-6c8b9c8fc-jbbh7\" (UID: \"340788de-455a-46ad-8015-5efb6f0289f2\") " pod="calico-system/whisker-6c8b9c8fc-jbbh7" Sep 13 00:00:24.205055 kubelet[3348]: I0913 00:00:24.205005 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/340788de-455a-46ad-8015-5efb6f0289f2-whisker-ca-bundle\") pod \"whisker-6c8b9c8fc-jbbh7\" (UID: \"340788de-455a-46ad-8015-5efb6f0289f2\") " pod="calico-system/whisker-6c8b9c8fc-jbbh7" Sep 13 00:00:24.205055 kubelet[3348]: I0913 00:00:24.205028 3348 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbhv5\" (UniqueName: 
\"kubernetes.io/projected/340788de-455a-46ad-8015-5efb6f0289f2-kube-api-access-xbhv5\") pod \"whisker-6c8b9c8fc-jbbh7\" (UID: \"340788de-455a-46ad-8015-5efb6f0289f2\") " pod="calico-system/whisker-6c8b9c8fc-jbbh7" Sep 13 00:00:24.407364 containerd[1830]: time="2025-09-13T00:00:24.406929401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c8b9c8fc-jbbh7,Uid:340788de-455a-46ad-8015-5efb6f0289f2,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:24.658304 systemd-networkd[1407]: calif987d34f808: Link UP Sep 13 00:00:24.658545 systemd-networkd[1407]: calif987d34f808: Gained carrier Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.478 [INFO][4752] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.495 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0 whisker-6c8b9c8fc- calico-system 340788de-455a-46ad-8015-5efb6f0289f2 915 0 2025-09-13 00:00:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c8b9c8fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 whisker-6c8b9c8fc-jbbh7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif987d34f808 [] [] }} ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.495 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" 
WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.537 [INFO][4764] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" HandleID="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.537 [INFO][4764] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" HandleID="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"whisker-6c8b9c8fc-jbbh7", "timestamp":"2025-09-13 00:00:24.537187203 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.537 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.537 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.537 [INFO][4764] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.550 [INFO][4764] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.556 [INFO][4764] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.564 [INFO][4764] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.567 [INFO][4764] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.571 [INFO][4764] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.572 [INFO][4764] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.580 [INFO][4764] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602 Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.586 [INFO][4764] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.598 [INFO][4764] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.1/26] block=192.168.117.0/26 handle="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.599 [INFO][4764] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.1/26] handle="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.599 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:24.676439 containerd[1830]: 2025-09-13 00:00:24.599 [INFO][4764] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.1/26] IPv6=[] ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" HandleID="k8s-pod-network.36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.678711 containerd[1830]: 2025-09-13 00:00:24.602 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0", GenerateName:"whisker-6c8b9c8fc-", Namespace:"calico-system", SelfLink:"", UID:"340788de-455a-46ad-8015-5efb6f0289f2", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c8b9c8fc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"whisker-6c8b9c8fc-jbbh7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif987d34f808", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:24.678711 containerd[1830]: 2025-09-13 00:00:24.602 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.1/32] ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.678711 containerd[1830]: 2025-09-13 00:00:24.602 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif987d34f808 ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.678711 containerd[1830]: 2025-09-13 00:00:24.647 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.678711 containerd[1830]: 2025-09-13 00:00:24.647 [INFO][4752] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0", GenerateName:"whisker-6c8b9c8fc-", Namespace:"calico-system", SelfLink:"", UID:"340788de-455a-46ad-8015-5efb6f0289f2", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c8b9c8fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602", Pod:"whisker-6c8b9c8fc-jbbh7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif987d34f808", MAC:"46:85:fa:2e:6f:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:24.678711 containerd[1830]: 2025-09-13 00:00:24.673 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602" 
Namespace="calico-system" Pod="whisker-6c8b9c8fc-jbbh7" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--6c8b9c8fc--jbbh7-eth0" Sep 13 00:00:24.709787 containerd[1830]: time="2025-09-13T00:00:24.709647514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:24.709787 containerd[1830]: time="2025-09-13T00:00:24.709785394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:24.710026 containerd[1830]: time="2025-09-13T00:00:24.709813193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:24.710026 containerd[1830]: time="2025-09-13T00:00:24.709939193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:24.768308 containerd[1830]: time="2025-09-13T00:00:24.768249083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c8b9c8fc-jbbh7,Uid:340788de-455a-46ad-8015-5efb6f0289f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602\"" Sep 13 00:00:24.771734 containerd[1830]: time="2025-09-13T00:00:24.771383679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:00:25.518180 kernel: bpftool[4940]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:00:25.752078 kubelet[3348]: I0913 00:00:25.751739 3348 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d99d46-ced1-4edc-abbd-62bd19d065ab" path="/var/lib/kubelet/pods/19d99d46-ced1-4edc-abbd-62bd19d065ab/volumes" Sep 13 00:00:26.288371 systemd-networkd[1407]: vxlan.calico: Link UP Sep 13 00:00:26.289373 systemd-networkd[1407]: vxlan.calico: Gained carrier Sep 13 00:00:26.616336 systemd-networkd[1407]: 
calif987d34f808: Gained IPv6LL Sep 13 00:00:27.321322 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Sep 13 00:00:27.737420 containerd[1830]: time="2025-09-13T00:00:27.737050605Z" level=info msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.798 [INFO][5026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.798 [INFO][5026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" iface="eth0" netns="/var/run/netns/cni-d6a1ef39-daed-49c7-54fc-98ce6b480c96" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.803 [INFO][5026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" iface="eth0" netns="/var/run/netns/cni-d6a1ef39-daed-49c7-54fc-98ce6b480c96" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.804 [INFO][5026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" iface="eth0" netns="/var/run/netns/cni-d6a1ef39-daed-49c7-54fc-98ce6b480c96" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.804 [INFO][5026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.804 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.834 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.834 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.834 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.844 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.844 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.846 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:27.850353 containerd[1830]: 2025-09-13 00:00:27.848 [INFO][5026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:27.852681 containerd[1830]: time="2025-09-13T00:00:27.850514187Z" level=info msg="TearDown network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" successfully" Sep 13 00:00:27.852681 containerd[1830]: time="2025-09-13T00:00:27.850544187Z" level=info msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" returns successfully" Sep 13 00:00:27.853517 containerd[1830]: time="2025-09-13T00:00:27.853459184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8jln,Uid:9fdfb2ff-1d8f-40fb-8d56-8897236ec308,Namespace:kube-system,Attempt:1,}" Sep 13 00:00:27.854580 systemd[1]: run-netns-cni\x2dd6a1ef39\x2ddaed\x2d49c7\x2d54fc\x2d98ce6b480c96.mount: Deactivated successfully. 
Sep 13 00:00:28.049959 systemd-networkd[1407]: cali08a642dbabd: Link UP Sep 13 00:00:28.064385 systemd-networkd[1407]: cali08a642dbabd: Gained carrier Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.935 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0 coredns-7c65d6cfc9- kube-system 9fdfb2ff-1d8f-40fb-8d56-8897236ec308 931 0 2025-09-12 23:59:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 coredns-7c65d6cfc9-q8jln eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08a642dbabd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.935 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.965 [INFO][5051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" HandleID="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.965 [INFO][5051] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" HandleID="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"coredns-7c65d6cfc9-q8jln", "timestamp":"2025-09-13 00:00:27.965659288 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.965 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.965 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.965 [INFO][5051] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.978 [INFO][5051] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.985 [INFO][5051] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.992 [INFO][5051] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.995 [INFO][5051] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:27.999 [INFO][5051] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.000 [INFO][5051] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.002 [INFO][5051] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.018 [INFO][5051] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.029 [INFO][5051] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.2/26] block=192.168.117.0/26 handle="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.029 [INFO][5051] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.2/26] handle="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.029 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:28.102662 containerd[1830]: 2025-09-13 00:00:28.030 [INFO][5051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.2/26] IPv6=[] ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" HandleID="k8s-pod-network.82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.103788 containerd[1830]: 2025-09-13 00:00:28.034 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9fdfb2ff-1d8f-40fb-8d56-8897236ec308", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"coredns-7c65d6cfc9-q8jln", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08a642dbabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:28.103788 containerd[1830]: 2025-09-13 00:00:28.035 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.2/32] ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.103788 containerd[1830]: 2025-09-13 00:00:28.035 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08a642dbabd ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.103788 containerd[1830]: 2025-09-13 00:00:28.069 [INFO][5039] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.103788 containerd[1830]: 2025-09-13 00:00:28.074 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9fdfb2ff-1d8f-40fb-8d56-8897236ec308", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da", Pod:"coredns-7c65d6cfc9-q8jln", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08a642dbabd", 
MAC:"be:38:f3:76:23:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:28.103788 containerd[1830]: 2025-09-13 00:00:28.098 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da" Namespace="kube-system" Pod="coredns-7c65d6cfc9-q8jln" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:28.156341 containerd[1830]: time="2025-09-13T00:00:28.155872937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:28.156341 containerd[1830]: time="2025-09-13T00:00:28.155949817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:28.156341 containerd[1830]: time="2025-09-13T00:00:28.155992097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:28.158176 containerd[1830]: time="2025-09-13T00:00:28.156904496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:28.233986 containerd[1830]: time="2025-09-13T00:00:28.233927563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-q8jln,Uid:9fdfb2ff-1d8f-40fb-8d56-8897236ec308,Namespace:kube-system,Attempt:1,} returns sandbox id \"82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da\"" Sep 13 00:00:28.238466 containerd[1830]: time="2025-09-13T00:00:28.238365637Z" level=info msg="CreateContainer within sandbox \"82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:00:28.287915 containerd[1830]: time="2025-09-13T00:00:28.281704985Z" level=info msg="CreateContainer within sandbox \"82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2799ea923cb6c9e3e95bf52888624fe41a4a34c53d9e07f3f8b80c9ae8f6d3f3\"" Sep 13 00:00:28.289220 containerd[1830]: time="2025-09-13T00:00:28.288959216Z" level=info msg="StartContainer for \"2799ea923cb6c9e3e95bf52888624fe41a4a34c53d9e07f3f8b80c9ae8f6d3f3\"" Sep 13 00:00:28.369364 containerd[1830]: time="2025-09-13T00:00:28.368489000Z" level=info msg="StartContainer for \"2799ea923cb6c9e3e95bf52888624fe41a4a34c53d9e07f3f8b80c9ae8f6d3f3\" returns successfully" Sep 13 00:00:28.735597 containerd[1830]: time="2025-09-13T00:00:28.735463515Z" level=info msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.798 [INFO][5155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.799 [INFO][5155] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" iface="eth0" netns="/var/run/netns/cni-24f50216-e1d1-145f-09a5-52fcb0a40137" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.799 [INFO][5155] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" iface="eth0" netns="/var/run/netns/cni-24f50216-e1d1-145f-09a5-52fcb0a40137" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.799 [INFO][5155] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" iface="eth0" netns="/var/run/netns/cni-24f50216-e1d1-145f-09a5-52fcb0a40137" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.799 [INFO][5155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.799 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.829 [INFO][5163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.829 [INFO][5163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.829 [INFO][5163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.843 [WARNING][5163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.843 [INFO][5163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.846 [INFO][5163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:28.854678 containerd[1830]: 2025-09-13 00:00:28.848 [INFO][5155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:28.855909 containerd[1830]: time="2025-09-13T00:00:28.855278210Z" level=info msg="TearDown network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" successfully" Sep 13 00:00:28.855909 containerd[1830]: time="2025-09-13T00:00:28.855317610Z" level=info msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" returns successfully" Sep 13 00:00:28.861512 containerd[1830]: time="2025-09-13T00:00:28.859254125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c5f5d79c6-pslxk,Uid:38b924fa-67d4-4dd5-b92a-a61c832c28b3,Namespace:calico-system,Attempt:1,}" Sep 13 00:00:28.862756 systemd[1]: run-netns-cni\x2d24f50216\x2de1d1\x2d145f\x2d09a5\x2d52fcb0a40137.mount: Deactivated successfully. 
Sep 13 00:00:29.106748 kubelet[3348]: I0913 00:00:29.102390 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-q8jln" podStartSLOduration=49.10235919 podStartE2EDuration="49.10235919s" podCreationTimestamp="2025-09-12 23:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:29.064750996 +0000 UTC m=+55.441242714" watchObservedRunningTime="2025-09-13 00:00:29.10235919 +0000 UTC m=+55.478850908" Sep 13 00:00:29.190732 systemd-networkd[1407]: cali565fcf2ba77: Link UP Sep 13 00:00:29.195644 systemd-networkd[1407]: cali565fcf2ba77: Gained carrier Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:28.974 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0 calico-kube-controllers-7c5f5d79c6- calico-system 38b924fa-67d4-4dd5-b92a-a61c832c28b3 940 0 2025-09-12 23:59:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c5f5d79c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 calico-kube-controllers-7c5f5d79c6-pslxk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali565fcf2ba77 [] [] }} ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:28.974 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.077 [INFO][5181] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" HandleID="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.079 [INFO][5181] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" HandleID="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"calico-kube-controllers-7c5f5d79c6-pslxk", "timestamp":"2025-09-13 00:00:29.07782846 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.079 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.079 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.079 [INFO][5181] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.108 [INFO][5181] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.130 [INFO][5181] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.149 [INFO][5181] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.153 [INFO][5181] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.156 [INFO][5181] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.156 [INFO][5181] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.159 [INFO][5181] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5 Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.166 [INFO][5181] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.177 [INFO][5181] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.3/26] block=192.168.117.0/26 handle="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.177 [INFO][5181] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.3/26] handle="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.177 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:29.218677 containerd[1830]: 2025-09-13 00:00:29.177 [INFO][5181] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.3/26] IPv6=[] ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" HandleID="k8s-pod-network.8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.220319 containerd[1830]: 2025-09-13 00:00:29.180 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0", GenerateName:"calico-kube-controllers-7c5f5d79c6-", Namespace:"calico-system", SelfLink:"", UID:"38b924fa-67d4-4dd5-b92a-a61c832c28b3", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c5f5d79c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"calico-kube-controllers-7c5f5d79c6-pslxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565fcf2ba77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:29.220319 containerd[1830]: 2025-09-13 00:00:29.181 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.3/32] ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.220319 containerd[1830]: 2025-09-13 00:00:29.181 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali565fcf2ba77 ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.220319 containerd[1830]: 2025-09-13 00:00:29.194 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.220319 containerd[1830]: 2025-09-13 00:00:29.194 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0", GenerateName:"calico-kube-controllers-7c5f5d79c6-", Namespace:"calico-system", SelfLink:"", UID:"38b924fa-67d4-4dd5-b92a-a61c832c28b3", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c5f5d79c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5", Pod:"calico-kube-controllers-7c5f5d79c6-pslxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565fcf2ba77", MAC:"96:21:68:70:b0:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:29.220319 containerd[1830]: 2025-09-13 00:00:29.212 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5" Namespace="calico-system" Pod="calico-kube-controllers-7c5f5d79c6-pslxk" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:29.248589 containerd[1830]: time="2025-09-13T00:00:29.248455493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:29.248589 containerd[1830]: time="2025-09-13T00:00:29.248536413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:29.248589 containerd[1830]: time="2025-09-13T00:00:29.248551893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:29.249241 containerd[1830]: time="2025-09-13T00:00:29.248650013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:29.311311 containerd[1830]: time="2025-09-13T00:00:29.311264617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c5f5d79c6-pslxk,Uid:38b924fa-67d4-4dd5-b92a-a61c832c28b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5\"" Sep 13 00:00:30.008550 systemd-networkd[1407]: cali08a642dbabd: Gained IPv6LL Sep 13 00:00:30.735393 containerd[1830]: time="2025-09-13T00:00:30.735260332Z" level=info msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.796 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.797 [INFO][5251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" iface="eth0" netns="/var/run/netns/cni-e967f107-8b13-fe3f-eeb9-7786355ef584" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.798 [INFO][5251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" iface="eth0" netns="/var/run/netns/cni-e967f107-8b13-fe3f-eeb9-7786355ef584" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.799 [INFO][5251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" iface="eth0" netns="/var/run/netns/cni-e967f107-8b13-fe3f-eeb9-7786355ef584" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.799 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.799 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.828 [INFO][5258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.828 [INFO][5258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.828 [INFO][5258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.842 [WARNING][5258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.842 [INFO][5258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.844 [INFO][5258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:30.848742 containerd[1830]: 2025-09-13 00:00:30.846 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:30.852236 containerd[1830]: time="2025-09-13T00:00:30.849812153Z" level=info msg="TearDown network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" successfully" Sep 13 00:00:30.852236 containerd[1830]: time="2025-09-13T00:00:30.849862473Z" level=info msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" returns successfully" Sep 13 00:00:30.852236 containerd[1830]: time="2025-09-13T00:00:30.851714430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4wv45,Uid:7fc7d1ef-2f94-42c6-a027-0084519d796b,Namespace:calico-system,Attempt:1,}" Sep 13 00:00:30.854751 systemd[1]: run-netns-cni\x2de967f107\x2d8b13\x2dfe3f\x2deeb9\x2d7786355ef584.mount: Deactivated successfully. 
Sep 13 00:00:30.969206 systemd-networkd[1407]: cali565fcf2ba77: Gained IPv6LL Sep 13 00:00:31.091514 systemd-networkd[1407]: calibca27c33735: Link UP Sep 13 00:00:31.092882 systemd-networkd[1407]: calibca27c33735: Gained carrier Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:30.993 [INFO][5264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0 csi-node-driver- calico-system 7fc7d1ef-2f94-42c6-a027-0084519d796b 957 0 2025-09-12 23:59:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 csi-node-driver-4wv45 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibca27c33735 [] [] }} ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:30.993 [INFO][5264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.031 [INFO][5276] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" HandleID="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 
00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.034 [INFO][5276] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" HandleID="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aaff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"csi-node-driver-4wv45", "timestamp":"2025-09-13 00:00:31.031532532 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.034 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.034 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.034 [INFO][5276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.045 [INFO][5276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.051 [INFO][5276] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.057 [INFO][5276] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.060 [INFO][5276] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.063 [INFO][5276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.063 [INFO][5276] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.065 [INFO][5276] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.071 [INFO][5276] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.083 [INFO][5276] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.4/26] block=192.168.117.0/26 handle="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.083 [INFO][5276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.4/26] handle="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.083 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:31.123363 containerd[1830]: 2025-09-13 00:00:31.083 [INFO][5276] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.4/26] IPv6=[] ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" HandleID="k8s-pod-network.6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:31.123990 containerd[1830]: 2025-09-13 00:00:31.086 [INFO][5264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7fc7d1ef-2f94-42c6-a027-0084519d796b", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"csi-node-driver-4wv45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibca27c33735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:31.123990 containerd[1830]: 2025-09-13 00:00:31.086 [INFO][5264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.4/32] ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:31.123990 containerd[1830]: 2025-09-13 00:00:31.086 [INFO][5264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibca27c33735 ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:31.123990 containerd[1830]: 2025-09-13 00:00:31.092 [INFO][5264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:31.123990 
containerd[1830]: 2025-09-13 00:00:31.095 [INFO][5264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7fc7d1ef-2f94-42c6-a027-0084519d796b", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb", Pod:"csi-node-driver-4wv45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibca27c33735", MAC:"b2:e5:05:d5:15:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:31.123990 containerd[1830]: 
2025-09-13 00:00:31.117 [INFO][5264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb" Namespace="calico-system" Pod="csi-node-driver-4wv45" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:31.157072 containerd[1830]: time="2025-09-13T00:00:31.156690168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:31.157072 containerd[1830]: time="2025-09-13T00:00:31.156777528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:31.157072 containerd[1830]: time="2025-09-13T00:00:31.156826768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:31.157072 containerd[1830]: time="2025-09-13T00:00:31.156993607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:31.207011 containerd[1830]: time="2025-09-13T00:00:31.206963918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4wv45,Uid:7fc7d1ef-2f94-42c6-a027-0084519d796b,Namespace:calico-system,Attempt:1,} returns sandbox id \"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb\"" Sep 13 00:00:31.737759 containerd[1830]: time="2025-09-13T00:00:31.737189684Z" level=info msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.796 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.796 [INFO][5341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" iface="eth0" netns="/var/run/netns/cni-7608268f-be8b-172b-8641-80bf026ebf68" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.797 [INFO][5341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" iface="eth0" netns="/var/run/netns/cni-7608268f-be8b-172b-8641-80bf026ebf68" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.800 [INFO][5341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" iface="eth0" netns="/var/run/netns/cni-7608268f-be8b-172b-8641-80bf026ebf68" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.800 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.800 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.824 [INFO][5349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.824 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.824 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.834 [WARNING][5349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.834 [INFO][5349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.836 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:31.840575 containerd[1830]: 2025-09-13 00:00:31.838 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:31.841577 containerd[1830]: time="2025-09-13T00:00:31.841388337Z" level=info msg="TearDown network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" successfully" Sep 13 00:00:31.841577 containerd[1830]: time="2025-09-13T00:00:31.841432816Z" level=info msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" returns successfully" Sep 13 00:00:31.843724 containerd[1830]: time="2025-09-13T00:00:31.842413135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-w2j5v,Uid:eddd75bf-28ce-46f2-ab6e-7773bfe85938,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:00:31.844788 systemd[1]: run-netns-cni\x2d7608268f\x2dbe8b\x2d172b\x2d8641\x2d80bf026ebf68.mount: Deactivated successfully. 
Sep 13 00:00:32.015376 systemd-networkd[1407]: cali0017733e628: Link UP Sep 13 00:00:32.016808 systemd-networkd[1407]: cali0017733e628: Gained carrier Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.921 [INFO][5360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0 calico-apiserver-7d7646db59- calico-apiserver eddd75bf-28ce-46f2-ab6e-7773bfe85938 965 0 2025-09-12 23:59:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d7646db59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 calico-apiserver-7d7646db59-w2j5v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0017733e628 [] [] }} ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.922 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.955 [INFO][5373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" HandleID="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 
00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.955 [INFO][5373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" HandleID="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"calico-apiserver-7d7646db59-w2j5v", "timestamp":"2025-09-13 00:00:31.955391411 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.955 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.955 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.955 [INFO][5373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.966 [INFO][5373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.971 [INFO][5373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.977 [INFO][5373] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.981 [INFO][5373] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.984 [INFO][5373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.984 [INFO][5373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.986 [INFO][5373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:31.992 [INFO][5373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:32.004 [INFO][5373] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.5/26] block=192.168.117.0/26 handle="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:32.004 [INFO][5373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.5/26] handle="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:32.004 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:32.048237 containerd[1830]: 2025-09-13 00:00:32.004 [INFO][5373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.5/26] IPv6=[] ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" HandleID="k8s-pod-network.7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:32.048797 containerd[1830]: 2025-09-13 00:00:32.007 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"eddd75bf-28ce-46f2-ab6e-7773bfe85938", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"calico-apiserver-7d7646db59-w2j5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0017733e628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:32.048797 containerd[1830]: 2025-09-13 00:00:32.007 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.5/32] ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:32.048797 containerd[1830]: 2025-09-13 00:00:32.007 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0017733e628 ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:32.048797 containerd[1830]: 2025-09-13 00:00:32.015 [INFO][5360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:32.048797 containerd[1830]: 2025-09-13 00:00:32.016 [INFO][5360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"eddd75bf-28ce-46f2-ab6e-7773bfe85938", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b", Pod:"calico-apiserver-7d7646db59-w2j5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali0017733e628", MAC:"02:a3:28:07:2c:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:32.048797 containerd[1830]: 2025-09-13 00:00:32.035 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-w2j5v" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:32.078459 containerd[1830]: time="2025-09-13T00:00:32.078212911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:32.078459 containerd[1830]: time="2025-09-13T00:00:32.078309790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:32.078459 containerd[1830]: time="2025-09-13T00:00:32.078322710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:32.078646 containerd[1830]: time="2025-09-13T00:00:32.078563670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:32.104674 systemd[1]: run-containerd-runc-k8s.io-7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b-runc.xGEXLm.mount: Deactivated successfully. 
Sep 13 00:00:32.157642 containerd[1830]: time="2025-09-13T00:00:32.157546728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-w2j5v,Uid:eddd75bf-28ce-46f2-ab6e-7773bfe85938,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b\"" Sep 13 00:00:33.080252 systemd-networkd[1407]: calibca27c33735: Gained IPv6LL Sep 13 00:00:33.738208 containerd[1830]: time="2025-09-13T00:00:33.737885766Z" level=info msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" Sep 13 00:00:33.740613 containerd[1830]: time="2025-09-13T00:00:33.739450083Z" level=info msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" Sep 13 00:00:33.752371 containerd[1830]: time="2025-09-13T00:00:33.752044380Z" level=info msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.813 [WARNING][5467] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"eddd75bf-28ce-46f2-ab6e-7773bfe85938", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b", Pod:"calico-apiserver-7d7646db59-w2j5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0017733e628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.814 [INFO][5467] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.814 [INFO][5467] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" iface="eth0" netns="" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.814 [INFO][5467] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.814 [INFO][5467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.863 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.865 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.865 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.882 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.882 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.885 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:33.898343 containerd[1830]: 2025-09-13 00:00:33.893 [INFO][5467] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:33.898939 containerd[1830]: time="2025-09-13T00:00:33.898444197Z" level=info msg="TearDown network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" successfully" Sep 13 00:00:33.898939 containerd[1830]: time="2025-09-13T00:00:33.898474757Z" level=info msg="StopPodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" returns successfully" Sep 13 00:00:33.899878 containerd[1830]: time="2025-09-13T00:00:33.899305755Z" level=info msg="RemovePodSandbox for \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" Sep 13 00:00:33.899878 containerd[1830]: time="2025-09-13T00:00:33.899350555Z" level=info msg="Forcibly stopping sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\"" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.843 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 
00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.844 [INFO][5452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" iface="eth0" netns="/var/run/netns/cni-9a851089-fdaf-c104-df05-7bff6a4b431d" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.844 [INFO][5452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" iface="eth0" netns="/var/run/netns/cni-9a851089-fdaf-c104-df05-7bff6a4b431d" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.844 [INFO][5452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" iface="eth0" netns="/var/run/netns/cni-9a851089-fdaf-c104-df05-7bff6a4b431d" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.844 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.844 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.912 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.912 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.912 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.931 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.931 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.935 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:33.947128 containerd[1830]: 2025-09-13 00:00:33.939 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:00:33.948376 containerd[1830]: time="2025-09-13T00:00:33.948322787Z" level=info msg="TearDown network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" successfully" Sep 13 00:00:33.948376 containerd[1830]: time="2025-09-13T00:00:33.948372427Z" level=info msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" returns successfully" Sep 13 00:00:33.949775 systemd[1]: run-netns-cni\x2d9a851089\x2dfdaf\x2dc104\x2ddf05\x2d7bff6a4b431d.mount: Deactivated successfully. 
Sep 13 00:00:33.953666 containerd[1830]: time="2025-09-13T00:00:33.952906539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-7hfrz,Uid:bd16400c-5cb4-4ddc-a634-d229d8486603,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.852 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.852 [INFO][5451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" iface="eth0" netns="/var/run/netns/cni-8d0d5636-3e82-ab60-f4f2-a3a0d2cbd301" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.852 [INFO][5451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" iface="eth0" netns="/var/run/netns/cni-8d0d5636-3e82-ab60-f4f2-a3a0d2cbd301" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.854 [INFO][5451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" iface="eth0" netns="/var/run/netns/cni-8d0d5636-3e82-ab60-f4f2-a3a0d2cbd301" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.854 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.854 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.923 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.923 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.939 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.960 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.960 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.963 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:33.973603 containerd[1830]: 2025-09-13 00:00:33.968 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:00:33.979458 containerd[1830]: time="2025-09-13T00:00:33.978790012Z" level=info msg="TearDown network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" successfully" Sep 13 00:00:33.979458 containerd[1830]: time="2025-09-13T00:00:33.978836812Z" level=info msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" returns successfully" Sep 13 00:00:33.979612 systemd[1]: run-netns-cni\x2d8d0d5636\x2d3e82\x2dab60\x2df4f2\x2da3a0d2cbd301.mount: Deactivated successfully. 
Sep 13 00:00:33.980280 systemd-networkd[1407]: cali0017733e628: Gained IPv6LL Sep 13 00:00:33.983262 containerd[1830]: time="2025-09-13T00:00:33.982953765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-c6kj9,Uid:22768cde-472b-4c66-b597-59195c05c1b3,Namespace:calico-system,Attempt:1,}" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.013 [WARNING][5505] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"eddd75bf-28ce-46f2-ab6e-7773bfe85938", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b", Pod:"calico-apiserver-7d7646db59-w2j5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0017733e628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.014 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.014 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" iface="eth0" netns="" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.014 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.014 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.048 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.048 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.048 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.062 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.062 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" HandleID="k8s-pod-network.c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--w2j5v-eth0" Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.064 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.080796 containerd[1830]: 2025-09-13 00:00:34.068 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8" Sep 13 00:00:34.080796 containerd[1830]: time="2025-09-13T00:00:34.079212992Z" level=info msg="TearDown network for sandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" successfully" Sep 13 00:00:34.121820 containerd[1830]: time="2025-09-13T00:00:34.121761835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:00:34.121969 containerd[1830]: time="2025-09-13T00:00:34.121846275Z" level=info msg="RemovePodSandbox \"c2ca29967db327367e638f64ef73e96a0fdafa68a17a0ec343635c274bc41dc8\" returns successfully" Sep 13 00:00:34.122636 containerd[1830]: time="2025-09-13T00:00:34.122597354Z" level=info msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" Sep 13 00:00:34.246810 systemd-networkd[1407]: cali07c3c8d5146: Link UP Sep 13 00:00:34.247972 systemd-networkd[1407]: cali07c3c8d5146: Gained carrier Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.087 [INFO][5521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0 calico-apiserver-7d7646db59- calico-apiserver bd16400c-5cb4-4ddc-a634-d229d8486603 977 0 2025-09-12 23:59:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d7646db59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 calico-apiserver-7d7646db59-7hfrz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07c3c8d5146 [] [] }} ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.088 [INFO][5521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.285710 
containerd[1830]: 2025-09-13 00:00:34.131 [INFO][5538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" HandleID="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.131 [INFO][5538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" HandleID="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002caff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"calico-apiserver-7d7646db59-7hfrz", "timestamp":"2025-09-13 00:00:34.131438418 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.131 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.132 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.132 [INFO][5538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.155 [INFO][5538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.164 [INFO][5538] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.172 [INFO][5538] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.182 [INFO][5538] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.192 [INFO][5538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.193 [INFO][5538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.197 [INFO][5538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673 Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.212 [INFO][5538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.230 [INFO][5538] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.6/26] block=192.168.117.0/26 handle="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.231 [INFO][5538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.6/26] handle="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.231 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.285710 containerd[1830]: 2025-09-13 00:00:34.231 [INFO][5538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.6/26] IPv6=[] ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" HandleID="k8s-pod-network.7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.286411 containerd[1830]: 2025-09-13 00:00:34.238 [INFO][5521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd16400c-5cb4-4ddc-a634-d229d8486603", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"calico-apiserver-7d7646db59-7hfrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c3c8d5146", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.286411 containerd[1830]: 2025-09-13 00:00:34.238 [INFO][5521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.6/32] ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.286411 containerd[1830]: 2025-09-13 00:00:34.238 [INFO][5521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07c3c8d5146 ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.286411 containerd[1830]: 2025-09-13 00:00:34.249 [INFO][5521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" 
Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.286411 containerd[1830]: 2025-09-13 00:00:34.250 [INFO][5521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd16400c-5cb4-4ddc-a634-d229d8486603", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673", Pod:"calico-apiserver-7d7646db59-7hfrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali07c3c8d5146", MAC:"06:e9:44:8a:ac:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.286411 containerd[1830]: 2025-09-13 00:00:34.282 [INFO][5521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673" Namespace="calico-apiserver" Pod="calico-apiserver-7d7646db59-7hfrz" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:00:34.333455 containerd[1830]: time="2025-09-13T00:00:34.332549856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:34.333455 containerd[1830]: time="2025-09-13T00:00:34.332830976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:34.333455 containerd[1830]: time="2025-09-13T00:00:34.332961055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:34.334259 containerd[1830]: time="2025-09-13T00:00:34.334138213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:34.374032 systemd-networkd[1407]: cali83e8384ee4a: Link UP Sep 13 00:00:34.374744 systemd-networkd[1407]: cali83e8384ee4a: Gained carrier Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.196 [INFO][5543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0 goldmane-7988f88666- calico-system 22768cde-472b-4c66-b597-59195c05c1b3 978 0 2025-09-12 23:59:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 goldmane-7988f88666-c6kj9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali83e8384ee4a [] [] }} ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.198 [INFO][5543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.269 [INFO][5570] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" HandleID="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.269 [INFO][5570] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" HandleID="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"goldmane-7988f88666-c6kj9", "timestamp":"2025-09-13 00:00:34.26901557 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.269 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.269 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.269 [INFO][5570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.287 [INFO][5570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.296 [INFO][5570] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.306 [INFO][5570] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.311 [INFO][5570] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.320 [INFO][5570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.320 [INFO][5570] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.324 [INFO][5570] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74 Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.336 [INFO][5570] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.361 [INFO][5570] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.117.7/26] block=192.168.117.0/26 handle="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.361 [INFO][5570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.7/26] handle="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.361 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.409518 containerd[1830]: 2025-09-13 00:00:34.361 [INFO][5570] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.7/26] IPv6=[] ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" HandleID="k8s-pod-network.7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.411247 containerd[1830]: 2025-09-13 00:00:34.367 [INFO][5543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"22768cde-472b-4c66-b597-59195c05c1b3", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"goldmane-7988f88666-c6kj9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83e8384ee4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.411247 containerd[1830]: 2025-09-13 00:00:34.367 [INFO][5543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.7/32] ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.411247 containerd[1830]: 2025-09-13 00:00:34.367 [INFO][5543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83e8384ee4a ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.411247 containerd[1830]: 2025-09-13 00:00:34.370 [INFO][5543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.411247 containerd[1830]: 2025-09-13 00:00:34.370 [INFO][5543] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"22768cde-472b-4c66-b597-59195c05c1b3", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74", Pod:"goldmane-7988f88666-c6kj9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83e8384ee4a", MAC:"32:7d:a8:64:21:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.411247 containerd[1830]: 2025-09-13 00:00:34.403 [INFO][5543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74" Namespace="calico-system" Pod="goldmane-7988f88666-c6kj9" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.215 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0", GenerateName:"calico-kube-controllers-7c5f5d79c6-", Namespace:"calico-system", SelfLink:"", UID:"38b924fa-67d4-4dd5-b92a-a61c832c28b3", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c5f5d79c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5", Pod:"calico-kube-controllers-7c5f5d79c6-pslxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565fcf2ba77", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.220 [INFO][5560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.220 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" iface="eth0" netns="" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.220 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.220 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.272 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.272 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.362 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.388 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.388 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.399 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.413489 containerd[1830]: 2025-09-13 00:00:34.405 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.414478 containerd[1830]: time="2025-09-13T00:00:34.413858110Z" level=info msg="TearDown network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" successfully" Sep 13 00:00:34.414478 containerd[1830]: time="2025-09-13T00:00:34.413885510Z" level=info msg="StopPodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" returns successfully" Sep 13 00:00:34.414478 containerd[1830]: time="2025-09-13T00:00:34.414356469Z" level=info msg="RemovePodSandbox for \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" Sep 13 00:00:34.414478 containerd[1830]: time="2025-09-13T00:00:34.414393829Z" level=info msg="Forcibly stopping sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\"" Sep 13 00:00:34.495310 containerd[1830]: time="2025-09-13T00:00:34.495186484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:34.495310 containerd[1830]: time="2025-09-13T00:00:34.495269083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:34.495700 containerd[1830]: time="2025-09-13T00:00:34.495287163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:34.495700 containerd[1830]: time="2025-09-13T00:00:34.495421403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:34.505144 containerd[1830]: time="2025-09-13T00:00:34.505040946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7646db59-7hfrz,Uid:bd16400c-5cb4-4ddc-a634-d229d8486603,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673\"" Sep 13 00:00:34.605991 containerd[1830]: time="2025-09-13T00:00:34.605828365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-c6kj9,Uid:22768cde-472b-4c66-b597-59195c05c1b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74\"" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.522 [WARNING][5643] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0", GenerateName:"calico-kube-controllers-7c5f5d79c6-", Namespace:"calico-system", SelfLink:"", UID:"38b924fa-67d4-4dd5-b92a-a61c832c28b3", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c5f5d79c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5", Pod:"calico-kube-controllers-7c5f5d79c6-pslxk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565fcf2ba77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.522 [INFO][5643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.522 [INFO][5643] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" iface="eth0" netns="" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.522 [INFO][5643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.522 [INFO][5643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.576 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.578 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.578 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.594 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.594 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" HandleID="k8s-pod-network.b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--kube--controllers--7c5f5d79c6--pslxk-eth0" Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.598 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.611896 containerd[1830]: 2025-09-13 00:00:34.605 [INFO][5643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634" Sep 13 00:00:34.612554 containerd[1830]: time="2025-09-13T00:00:34.611952674Z" level=info msg="TearDown network for sandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" successfully" Sep 13 00:00:34.638217 containerd[1830]: time="2025-09-13T00:00:34.637933787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:00:34.638217 containerd[1830]: time="2025-09-13T00:00:34.638015907Z" level=info msg="RemovePodSandbox \"b3e22a4248428e31cf33c376057a0d6dbc1a6b8b519b4313c866daa689dcf634\" returns successfully" Sep 13 00:00:34.639130 containerd[1830]: time="2025-09-13T00:00:34.638871585Z" level=info msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" Sep 13 00:00:34.667460 containerd[1830]: time="2025-09-13T00:00:34.667401014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:34.676596 containerd[1830]: time="2025-09-13T00:00:34.676544037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:00:34.681073 containerd[1830]: time="2025-09-13T00:00:34.680990869Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:34.687928 containerd[1830]: time="2025-09-13T00:00:34.687479378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:34.688930 containerd[1830]: time="2025-09-13T00:00:34.688883535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 9.917457576s" Sep 13 00:00:34.689067 containerd[1830]: time="2025-09-13T00:00:34.689050535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference 
\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:00:34.690697 containerd[1830]: time="2025-09-13T00:00:34.690663932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:00:34.693762 containerd[1830]: time="2025-09-13T00:00:34.693711047Z" level=info msg="CreateContainer within sandbox \"36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.680 [WARNING][5720] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7fc7d1ef-2f94-42c6-a027-0084519d796b", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb", Pod:"csi-node-driver-4wv45", Endpoint:"eth0", 
ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibca27c33735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.682 [INFO][5720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.682 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" iface="eth0" netns="" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.682 [INFO][5720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.682 [INFO][5720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.712 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.713 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.713 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.724 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.724 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.726 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.732123 containerd[1830]: 2025-09-13 00:00:34.729 [INFO][5720] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.732123 containerd[1830]: time="2025-09-13T00:00:34.731963738Z" level=info msg="TearDown network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" successfully" Sep 13 00:00:34.732123 containerd[1830]: time="2025-09-13T00:00:34.731990778Z" level=info msg="StopPodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" returns successfully" Sep 13 00:00:34.733466 containerd[1830]: time="2025-09-13T00:00:34.732879656Z" level=info msg="RemovePodSandbox for \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" Sep 13 00:00:34.733466 containerd[1830]: time="2025-09-13T00:00:34.733055296Z" level=info msg="Forcibly stopping sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\"" Sep 13 00:00:34.738428 containerd[1830]: time="2025-09-13T00:00:34.738290446Z" level=info msg="CreateContainer within sandbox \"36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0d0016363687b4ad713aeafc2eec995098a48e77fcdb1ad65ec557328e78a330\"" Sep 13 00:00:34.739616 containerd[1830]: time="2025-09-13T00:00:34.738555726Z" level=info msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" Sep 13 00:00:34.754924 containerd[1830]: time="2025-09-13T00:00:34.754770897Z" level=info msg="StartContainer for \"0d0016363687b4ad713aeafc2eec995098a48e77fcdb1ad65ec557328e78a330\"" Sep 13 00:00:34.854600 containerd[1830]: time="2025-09-13T00:00:34.854465357Z" level=info msg="StartContainer for \"0d0016363687b4ad713aeafc2eec995098a48e77fcdb1ad65ec557328e78a330\" returns successfully" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.851 [WARNING][5749] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7fc7d1ef-2f94-42c6-a027-0084519d796b", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb", Pod:"csi-node-driver-4wv45", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibca27c33735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.852 [INFO][5749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.852 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" iface="eth0" netns="" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.852 [INFO][5749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.852 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.902 [INFO][5798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.902 [INFO][5798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.902 [INFO][5798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.914 [WARNING][5798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.914 [INFO][5798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" HandleID="k8s-pod-network.72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Workload="ci--4081.3.5--n--d2844e2d10-k8s-csi--node--driver--4wv45-eth0" Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.918 [INFO][5798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.928133 containerd[1830]: 2025-09-13 00:00:34.923 [INFO][5749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae" Sep 13 00:00:34.928133 containerd[1830]: time="2025-09-13T00:00:34.927540146Z" level=info msg="TearDown network for sandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" successfully" Sep 13 00:00:34.937434 containerd[1830]: time="2025-09-13T00:00:34.937368808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:00:34.937624 containerd[1830]: time="2025-09-13T00:00:34.937453208Z" level=info msg="RemovePodSandbox \"72e363f3eabf8b755d67309d84cd6853f13ea22a082057ffe8f1594d40999dae\" returns successfully" Sep 13 00:00:34.937969 containerd[1830]: time="2025-09-13T00:00:34.937936767Z" level=info msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.894 [INFO][5757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.895 [INFO][5757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" iface="eth0" netns="/var/run/netns/cni-52ceb083-f772-0d14-38d2-6c3e0a716d0f" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.895 [INFO][5757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" iface="eth0" netns="/var/run/netns/cni-52ceb083-f772-0d14-38d2-6c3e0a716d0f" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.895 [INFO][5757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" iface="eth0" netns="/var/run/netns/cni-52ceb083-f772-0d14-38d2-6c3e0a716d0f" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.895 [INFO][5757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.895 [INFO][5757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.940 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.940 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.940 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.957 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.957 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.960 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:34.964359 containerd[1830]: 2025-09-13 00:00:34.962 [INFO][5757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:00:34.966354 containerd[1830]: time="2025-09-13T00:00:34.965277478Z" level=info msg="TearDown network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" successfully" Sep 13 00:00:34.966354 containerd[1830]: time="2025-09-13T00:00:34.965316638Z" level=info msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" returns successfully" Sep 13 00:00:34.968013 systemd[1]: run-netns-cni\x2d52ceb083\x2df772\x2d0d14\x2d38d2\x2d6c3e0a716d0f.mount: Deactivated successfully. 
Sep 13 00:00:34.968666 containerd[1830]: time="2025-09-13T00:00:34.968342753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88m8h,Uid:3c611723-85b5-4e9c-9b6a-4091949cf62d,Namespace:kube-system,Attempt:1,}" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.004 [WARNING][5823] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.004 [INFO][5823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.004 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" iface="eth0" netns="" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.004 [INFO][5823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.004 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.043 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.043 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.043 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.054 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.054 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.056 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:35.063142 containerd[1830]: 2025-09-13 00:00:35.060 [INFO][5823] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.063142 containerd[1830]: time="2025-09-13T00:00:35.062607943Z" level=info msg="TearDown network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" successfully" Sep 13 00:00:35.063142 containerd[1830]: time="2025-09-13T00:00:35.062638903Z" level=info msg="StopPodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" returns successfully" Sep 13 00:00:35.065070 containerd[1830]: time="2025-09-13T00:00:35.064495060Z" level=info msg="RemovePodSandbox for \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" Sep 13 00:00:35.065070 containerd[1830]: time="2025-09-13T00:00:35.064561380Z" level=info msg="Forcibly stopping sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\"" Sep 13 00:00:35.200334 systemd-networkd[1407]: cali225e89f1b06: Link UP Sep 13 00:00:35.207497 systemd-networkd[1407]: cali225e89f1b06: Gained carrier Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.136 [WARNING][5855] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.136 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.136 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" iface="eth0" netns="" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.136 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.136 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.177 [INFO][5868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.177 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.191 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.208 [WARNING][5868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.208 [INFO][5868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" HandleID="k8s-pod-network.9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Workload="ci--4081.3.5--n--d2844e2d10-k8s-whisker--5b5dd65974--vbvff-eth0" Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.211 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:35.227812 containerd[1830]: 2025-09-13 00:00:35.223 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb" Sep 13 00:00:35.229184 containerd[1830]: time="2025-09-13T00:00:35.228849644Z" level=info msg="TearDown network for sandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" successfully" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.085 [INFO][5835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0 coredns-7c65d6cfc9- kube-system 3c611723-85b5-4e9c-9b6a-4091949cf62d 993 0 2025-09-12 23:59:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-d2844e2d10 coredns-7c65d6cfc9-88m8h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali225e89f1b06 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.085 [INFO][5835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.124 [INFO][5860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" HandleID="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.124 [INFO][5860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" HandleID="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-d2844e2d10", "pod":"coredns-7c65d6cfc9-88m8h", "timestamp":"2025-09-13 00:00:35.124320832 +0000 UTC"}, Hostname:"ci-4081.3.5-n-d2844e2d10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.125 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.125 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.125 [INFO][5860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-d2844e2d10' Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.137 [INFO][5860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.149 [INFO][5860] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.156 [INFO][5860] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.160 [INFO][5860] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.164 [INFO][5860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.164 [INFO][5860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.168 [INFO][5860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65 Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.176 [INFO][5860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" 
host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.190 [INFO][5860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.117.8/26] block=192.168.117.0/26 handle="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.191 [INFO][5860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.8/26] handle="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" host="ci-4081.3.5-n-d2844e2d10" Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.191 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:35.239971 containerd[1830]: 2025-09-13 00:00:35.191 [INFO][5860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.117.8/26] IPv6=[] ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" HandleID="k8s-pod-network.f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.240702 containerd[1830]: 2025-09-13 00:00:35.193 [INFO][5835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c611723-85b5-4e9c-9b6a-4091949cf62d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"", Pod:"coredns-7c65d6cfc9-88m8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali225e89f1b06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:35.240702 containerd[1830]: 2025-09-13 00:00:35.193 [INFO][5835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.8/32] ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.240702 containerd[1830]: 2025-09-13 00:00:35.193 [INFO][5835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali225e89f1b06 ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" 
WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.240702 containerd[1830]: 2025-09-13 00:00:35.219 [INFO][5835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.240702 containerd[1830]: 2025-09-13 00:00:35.220 [INFO][5835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c611723-85b5-4e9c-9b6a-4091949cf62d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65", Pod:"coredns-7c65d6cfc9-88m8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali225e89f1b06", MAC:"0e:29:78:10:6e:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:35.240702 containerd[1830]: 2025-09-13 00:00:35.236 [INFO][5835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65" Namespace="kube-system" Pod="coredns-7c65d6cfc9-88m8h" WorkloadEndpoint="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:00:35.246779 containerd[1830]: time="2025-09-13T00:00:35.246229533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:00:35.246779 containerd[1830]: time="2025-09-13T00:00:35.246311293Z" level=info msg="RemovePodSandbox \"9687a3ea5b385546931985ca638abe1b5c685746578fca97e3527d6dad7a20eb\" returns successfully" Sep 13 00:00:35.247070 containerd[1830]: time="2025-09-13T00:00:35.247032811Z" level=info msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" Sep 13 00:00:35.282926 containerd[1830]: time="2025-09-13T00:00:35.282811507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:35.283283 containerd[1830]: time="2025-09-13T00:00:35.283153746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:35.283474 containerd[1830]: time="2025-09-13T00:00:35.283248266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:35.285392 containerd[1830]: time="2025-09-13T00:00:35.285230743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:35.359489 containerd[1830]: time="2025-09-13T00:00:35.359426649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-88m8h,Uid:3c611723-85b5-4e9c-9b6a-4091949cf62d,Namespace:kube-system,Attempt:1,} returns sandbox id \"f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65\"" Sep 13 00:00:35.366017 containerd[1830]: time="2025-09-13T00:00:35.365882038Z" level=info msg="CreateContainer within sandbox \"f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.352 [WARNING][5898] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9fdfb2ff-1d8f-40fb-8d56-8897236ec308", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da", Pod:"coredns-7c65d6cfc9-q8jln", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08a642dbabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:35.405898 containerd[1830]: 
2025-09-13 00:00:35.352 [INFO][5898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.352 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" iface="eth0" netns="" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.352 [INFO][5898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.352 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.386 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.386 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.386 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.400 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.400 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.402 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:35.405898 containerd[1830]: 2025-09-13 00:00:35.404 [INFO][5898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.406646 containerd[1830]: time="2025-09-13T00:00:35.405932686Z" level=info msg="TearDown network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" successfully" Sep 13 00:00:35.406646 containerd[1830]: time="2025-09-13T00:00:35.405960005Z" level=info msg="StopPodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" returns successfully" Sep 13 00:00:35.406646 containerd[1830]: time="2025-09-13T00:00:35.406557164Z" level=info msg="RemovePodSandbox for \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" Sep 13 00:00:35.406646 containerd[1830]: time="2025-09-13T00:00:35.406598524Z" level=info msg="Forcibly stopping sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\"" Sep 13 00:00:35.414973 containerd[1830]: time="2025-09-13T00:00:35.414890789Z" level=info msg="CreateContainer within sandbox \"f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b88aa8d66ebc55816c51c49a2c043477dd6089045a3cab54d50767668956b507\"" Sep 13 00:00:35.416530 containerd[1830]: time="2025-09-13T00:00:35.416451027Z" level=info msg="StartContainer for \"b88aa8d66ebc55816c51c49a2c043477dd6089045a3cab54d50767668956b507\"" Sep 13 00:00:35.498100 containerd[1830]: time="2025-09-13T00:00:35.497947200Z" level=info msg="StartContainer for \"b88aa8d66ebc55816c51c49a2c043477dd6089045a3cab54d50767668956b507\" returns successfully" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.468 [WARNING][5959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9fdfb2ff-1d8f-40fb-8d56-8897236ec308", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"82176bc6551a56a874d76ea9ebc419e99885419233783f4382893c191ea4b0da", Pod:"coredns-7c65d6cfc9-q8jln", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08a642dbabd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.469 [INFO][5959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.469 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" iface="eth0" netns="" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.469 [INFO][5959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.469 [INFO][5959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.520 [INFO][5990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.520 [INFO][5990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.520 [INFO][5990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.534 [WARNING][5990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.534 [INFO][5990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" HandleID="k8s-pod-network.e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--q8jln-eth0" Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.537 [INFO][5990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:00:35.542247 containerd[1830]: 2025-09-13 00:00:35.539 [INFO][5959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24" Sep 13 00:00:35.543285 containerd[1830]: time="2025-09-13T00:00:35.543232039Z" level=info msg="TearDown network for sandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" successfully" Sep 13 00:00:35.557047 containerd[1830]: time="2025-09-13T00:00:35.556819774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:00:35.557047 containerd[1830]: time="2025-09-13T00:00:35.556909614Z" level=info msg="RemovePodSandbox \"e0b60e756e140961e8a0859fa42027c135374bdbfa2de123ad5cebba14679c24\" returns successfully" Sep 13 00:00:35.576283 systemd-networkd[1407]: cali07c3c8d5146: Gained IPv6LL Sep 13 00:00:36.118514 kubelet[3348]: I0913 00:00:36.117745 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-88m8h" podStartSLOduration=57.117723845 podStartE2EDuration="57.117723845s" podCreationTimestamp="2025-09-12 23:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:36.114739691 +0000 UTC m=+62.491231449" watchObservedRunningTime="2025-09-13 00:00:36.117723845 +0000 UTC m=+62.494215563" Sep 13 00:00:36.408349 systemd-networkd[1407]: cali83e8384ee4a: Gained IPv6LL Sep 13 00:00:37.176351 systemd-networkd[1407]: cali225e89f1b06: Gained IPv6LL Sep 13 00:00:43.615779 containerd[1830]: time="2025-09-13T00:00:43.615715156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:43.620655 containerd[1830]: time="2025-09-13T00:00:43.620602072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:00:43.624508 containerd[1830]: time="2025-09-13T00:00:43.624441109Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:43.631364 containerd[1830]: time="2025-09-13T00:00:43.631256863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 
00:00:43.632746 containerd[1830]: time="2025-09-13T00:00:43.631920463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 8.940751692s" Sep 13 00:00:43.632746 containerd[1830]: time="2025-09-13T00:00:43.631965583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:00:43.633934 containerd[1830]: time="2025-09-13T00:00:43.633895221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:00:43.651751 containerd[1830]: time="2025-09-13T00:00:43.651592486Z" level=info msg="CreateContainer within sandbox \"8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:00:43.698625 containerd[1830]: time="2025-09-13T00:00:43.698563407Z" level=info msg="CreateContainer within sandbox \"8692a2805466682ce29ebfea1384dc9f37e436b0d750a7ef37f09cb5206134d5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"16e91bddd521d2beeb71f10b66cd6d22604c58d64cbaec3eae9ef46f5ce7b7c5\"" Sep 13 00:00:43.700466 containerd[1830]: time="2025-09-13T00:00:43.699487646Z" level=info msg="StartContainer for \"16e91bddd521d2beeb71f10b66cd6d22604c58d64cbaec3eae9ef46f5ce7b7c5\"" Sep 13 00:00:43.773767 containerd[1830]: time="2025-09-13T00:00:43.773713544Z" level=info msg="StartContainer for \"16e91bddd521d2beeb71f10b66cd6d22604c58d64cbaec3eae9ef46f5ce7b7c5\" returns successfully" Sep 13 00:00:44.206401 kubelet[3348]: I0913 00:00:44.206327 3348 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/calico-kube-controllers-7c5f5d79c6-pslxk" podStartSLOduration=35.887572213 podStartE2EDuration="50.206304461s" podCreationTimestamp="2025-09-12 23:59:54 +0000 UTC" firstStartedPulling="2025-09-13 00:00:29.314852933 +0000 UTC m=+55.691344651" lastFinishedPulling="2025-09-13 00:00:43.633585181 +0000 UTC m=+70.010076899" observedRunningTime="2025-09-13 00:00:44.151044788 +0000 UTC m=+70.527536506" watchObservedRunningTime="2025-09-13 00:00:44.206304461 +0000 UTC m=+70.582796179" Sep 13 00:00:49.228803 containerd[1830]: time="2025-09-13T00:00:49.228745612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:49.236726 containerd[1830]: time="2025-09-13T00:00:49.236667084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:00:49.243555 containerd[1830]: time="2025-09-13T00:00:49.243488957Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:49.248747 containerd[1830]: time="2025-09-13T00:00:49.248674952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:49.249599 containerd[1830]: time="2025-09-13T00:00:49.249472791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 5.61553537s" Sep 13 00:00:49.249599 containerd[1830]: time="2025-09-13T00:00:49.249508111Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:00:49.251359 containerd[1830]: time="2025-09-13T00:00:49.250782390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:00:49.254103 containerd[1830]: time="2025-09-13T00:00:49.254040706Z" level=info msg="CreateContainer within sandbox \"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:00:49.307416 containerd[1830]: time="2025-09-13T00:00:49.307360893Z" level=info msg="CreateContainer within sandbox \"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"20b03c1a1a036ec04b5f4bbd0da582d8a00bb997607672ba0da507bf614608b5\"" Sep 13 00:00:49.309053 containerd[1830]: time="2025-09-13T00:00:49.308955971Z" level=info msg="StartContainer for \"20b03c1a1a036ec04b5f4bbd0da582d8a00bb997607672ba0da507bf614608b5\"" Sep 13 00:00:49.349275 systemd[1]: run-containerd-runc-k8s.io-20b03c1a1a036ec04b5f4bbd0da582d8a00bb997607672ba0da507bf614608b5-runc.L3QwrR.mount: Deactivated successfully. 
Sep 13 00:00:49.391573 containerd[1830]: time="2025-09-13T00:00:49.391442368Z" level=info msg="StartContainer for \"20b03c1a1a036ec04b5f4bbd0da582d8a00bb997607672ba0da507bf614608b5\" returns successfully" Sep 13 00:00:56.584269 containerd[1830]: time="2025-09-13T00:00:56.584203633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:56.589186 containerd[1830]: time="2025-09-13T00:00:56.589123307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:00:56.592946 containerd[1830]: time="2025-09-13T00:00:56.592127544Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:56.598631 containerd[1830]: time="2025-09-13T00:00:56.598577856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:56.600825 containerd[1830]: time="2025-09-13T00:00:56.600756134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 7.349926304s" Sep 13 00:00:56.600825 containerd[1830]: time="2025-09-13T00:00:56.600811454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:00:56.606406 containerd[1830]: time="2025-09-13T00:00:56.606334448Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:00:56.608758 containerd[1830]: time="2025-09-13T00:00:56.608715765Z" level=info msg="CreateContainer within sandbox \"7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:00:56.656944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530093978.mount: Deactivated successfully. Sep 13 00:00:56.664843 containerd[1830]: time="2025-09-13T00:00:56.664778462Z" level=info msg="CreateContainer within sandbox \"7b53fa218143bc4024fc504b3ea987d19125ce746a14c126fc9bad0f10d7687b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5ae0c003eb9cac92c87b9a408d6090f77034f65380b2da951df2f8c4beea489\"" Sep 13 00:00:56.666126 containerd[1830]: time="2025-09-13T00:00:56.665606701Z" level=info msg="StartContainer for \"d5ae0c003eb9cac92c87b9a408d6090f77034f65380b2da951df2f8c4beea489\"" Sep 13 00:00:56.802517 containerd[1830]: time="2025-09-13T00:00:56.802456867Z" level=info msg="StartContainer for \"d5ae0c003eb9cac92c87b9a408d6090f77034f65380b2da951df2f8c4beea489\" returns successfully" Sep 13 00:00:56.957076 containerd[1830]: time="2025-09-13T00:00:56.957023093Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:56.968115 containerd[1830]: time="2025-09-13T00:00:56.966599122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:00:56.972183 containerd[1830]: time="2025-09-13T00:00:56.971280197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 364.879669ms" Sep 13 00:00:56.972183 containerd[1830]: time="2025-09-13T00:00:56.971404597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:00:56.979150 containerd[1830]: time="2025-09-13T00:00:56.976665271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:00:56.981200 containerd[1830]: time="2025-09-13T00:00:56.979351308Z" level=info msg="CreateContainer within sandbox \"7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:00:57.037419 containerd[1830]: time="2025-09-13T00:00:57.037356523Z" level=info msg="CreateContainer within sandbox \"7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4e41e5a91dd7903d3eb4599d0a17e4731f69ce5bb2c6d49201510700e90715f1\"" Sep 13 00:00:57.040126 containerd[1830]: time="2025-09-13T00:00:57.040050560Z" level=info msg="StartContainer for \"4e41e5a91dd7903d3eb4599d0a17e4731f69ce5bb2c6d49201510700e90715f1\"" Sep 13 00:00:57.196766 kubelet[3348]: I0913 00:00:57.196408 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d7646db59-w2j5v" podStartSLOduration=40.747422303 podStartE2EDuration="1m5.192076589s" podCreationTimestamp="2025-09-12 23:59:52 +0000 UTC" firstStartedPulling="2025-09-13 00:00:32.159703804 +0000 UTC m=+58.536195522" lastFinishedPulling="2025-09-13 00:00:56.60435813 +0000 UTC m=+82.980849808" observedRunningTime="2025-09-13 00:00:57.191753669 +0000 UTC m=+83.568245387" watchObservedRunningTime="2025-09-13 00:00:57.192076589 +0000 UTC m=+83.568568307" Sep 13 00:00:57.219212 
containerd[1830]: time="2025-09-13T00:00:57.217675000Z" level=info msg="StartContainer for \"4e41e5a91dd7903d3eb4599d0a17e4731f69ce5bb2c6d49201510700e90715f1\" returns successfully" Sep 13 00:00:58.216622 kubelet[3348]: I0913 00:00:58.216016 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d7646db59-7hfrz" podStartSLOduration=43.750105262 podStartE2EDuration="1m6.215996716s" podCreationTimestamp="2025-09-12 23:59:52 +0000 UTC" firstStartedPulling="2025-09-13 00:00:34.508766219 +0000 UTC m=+60.885257937" lastFinishedPulling="2025-09-13 00:00:56.974657673 +0000 UTC m=+83.351149391" observedRunningTime="2025-09-13 00:00:58.215954676 +0000 UTC m=+84.592446394" watchObservedRunningTime="2025-09-13 00:00:58.215996716 +0000 UTC m=+84.592488394" Sep 13 00:01:01.306934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438578078.mount: Deactivated successfully. Sep 13 00:01:02.493117 containerd[1830]: time="2025-09-13T00:01:02.493051463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:02.499465 containerd[1830]: time="2025-09-13T00:01:02.499262296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:01:02.504472 containerd[1830]: time="2025-09-13T00:01:02.504398570Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:02.511917 containerd[1830]: time="2025-09-13T00:01:02.511415162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:02.513011 containerd[1830]: time="2025-09-13T00:01:02.512970720Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 5.536249969s" Sep 13 00:01:02.513501 containerd[1830]: time="2025-09-13T00:01:02.513471360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:01:02.517518 containerd[1830]: time="2025-09-13T00:01:02.515128558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:01:02.520542 containerd[1830]: time="2025-09-13T00:01:02.520499032Z" level=info msg="CreateContainer within sandbox \"7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:01:02.570156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640376518.mount: Deactivated successfully. 
Sep 13 00:01:02.628911 containerd[1830]: time="2025-09-13T00:01:02.628772110Z" level=info msg="CreateContainer within sandbox \"7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cb1d7bacc498704c50bf64bb8cf879ec600b8664ca7370ed10564ae85474cb49\"" Sep 13 00:01:02.631018 containerd[1830]: time="2025-09-13T00:01:02.630969547Z" level=info msg="StartContainer for \"cb1d7bacc498704c50bf64bb8cf879ec600b8664ca7370ed10564ae85474cb49\"" Sep 13 00:01:02.890793 containerd[1830]: time="2025-09-13T00:01:02.890658175Z" level=info msg="StartContainer for \"cb1d7bacc498704c50bf64bb8cf879ec600b8664ca7370ed10564ae85474cb49\" returns successfully" Sep 13 00:01:03.244943 kubelet[3348]: I0913 00:01:03.244745 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-c6kj9" podStartSLOduration=41.339040972 podStartE2EDuration="1m9.244723892s" podCreationTimestamp="2025-09-12 23:59:54 +0000 UTC" firstStartedPulling="2025-09-13 00:00:34.609239678 +0000 UTC m=+60.985731396" lastFinishedPulling="2025-09-13 00:01:02.514922638 +0000 UTC m=+88.891414316" observedRunningTime="2025-09-13 00:01:03.243075054 +0000 UTC m=+89.619566772" watchObservedRunningTime="2025-09-13 00:01:03.244723892 +0000 UTC m=+89.621215650" Sep 13 00:01:03.566545 systemd[1]: run-containerd-runc-k8s.io-cb1d7bacc498704c50bf64bb8cf879ec600b8664ca7370ed10564ae85474cb49-runc.cWskw9.mount: Deactivated successfully. Sep 13 00:01:04.270317 systemd[1]: run-containerd-runc-k8s.io-cb1d7bacc498704c50bf64bb8cf879ec600b8664ca7370ed10564ae85474cb49-runc.5ez0UF.mount: Deactivated successfully. Sep 13 00:01:10.907128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652359040.mount: Deactivated successfully. 
Sep 13 00:01:12.060148 containerd[1830]: time="2025-09-13T00:01:12.058388981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:12.062781 containerd[1830]: time="2025-09-13T00:01:12.062718215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:01:12.066725 containerd[1830]: time="2025-09-13T00:01:12.066053851Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:12.080219 containerd[1830]: time="2025-09-13T00:01:12.079976391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:12.082677 containerd[1830]: time="2025-09-13T00:01:12.081941468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 9.56676435s" Sep 13 00:01:12.084116 containerd[1830]: time="2025-09-13T00:01:12.083314666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:01:12.089682 containerd[1830]: time="2025-09-13T00:01:12.089466617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:01:12.095460 containerd[1830]: time="2025-09-13T00:01:12.095215249Z" level=info msg="CreateContainer within sandbox 
\"36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:01:12.167564 containerd[1830]: time="2025-09-13T00:01:12.167442706Z" level=info msg="CreateContainer within sandbox \"36ece0c843f5e8cb8ef531ac410fdc4de1c600e212c9d3c92cf46e9f7921c602\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6c5a5622b1b0af3ac42484d5b3e2ce47c4152666595fa03c6be5b1233c63c1af\"" Sep 13 00:01:12.170141 containerd[1830]: time="2025-09-13T00:01:12.169860423Z" level=info msg="StartContainer for \"6c5a5622b1b0af3ac42484d5b3e2ce47c4152666595fa03c6be5b1233c63c1af\"" Sep 13 00:01:12.303654 containerd[1830]: time="2025-09-13T00:01:12.301823915Z" level=info msg="StartContainer for \"6c5a5622b1b0af3ac42484d5b3e2ce47c4152666595fa03c6be5b1233c63c1af\" returns successfully" Sep 13 00:01:16.011986 containerd[1830]: time="2025-09-13T00:01:16.011921201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:16.016452 containerd[1830]: time="2025-09-13T00:01:16.016402555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:01:16.022905 containerd[1830]: time="2025-09-13T00:01:16.022454786Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:16.031397 containerd[1830]: time="2025-09-13T00:01:16.031346453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:16.033161 containerd[1830]: time="2025-09-13T00:01:16.033083211Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 3.943544354s" Sep 13 00:01:16.033506 containerd[1830]: time="2025-09-13T00:01:16.033332290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:01:16.037657 containerd[1830]: time="2025-09-13T00:01:16.037497725Z" level=info msg="CreateContainer within sandbox \"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:01:16.076841 containerd[1830]: time="2025-09-13T00:01:16.076214789Z" level=info msg="CreateContainer within sandbox \"6aaa9b12ab3d0fa65f8d7dfea28569c6947ce0c5d258c59fd7c41ce25f0c17eb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8adf07b64c81044f936f17b30a32f6154f9365080c5e0ca3f22a1b3ba93a6678\"" Sep 13 00:01:16.077367 containerd[1830]: time="2025-09-13T00:01:16.077221428Z" level=info msg="StartContainer for \"8adf07b64c81044f936f17b30a32f6154f9365080c5e0ca3f22a1b3ba93a6678\"" Sep 13 00:01:16.183747 containerd[1830]: time="2025-09-13T00:01:16.182487958Z" level=info msg="StartContainer for \"8adf07b64c81044f936f17b30a32f6154f9365080c5e0ca3f22a1b3ba93a6678\" returns successfully" Sep 13 00:01:16.300394 systemd[1]: run-containerd-runc-k8s.io-16e91bddd521d2beeb71f10b66cd6d22604c58d64cbaec3eae9ef46f5ce7b7c5-runc.O2TICi.mount: Deactivated successfully. 
Sep 13 00:01:16.505119 kubelet[3348]: I0913 00:01:16.501975 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c8b9c8fc-jbbh7" podStartSLOduration=5.183312366 podStartE2EDuration="52.501954304s" podCreationTimestamp="2025-09-13 00:00:24 +0000 UTC" firstStartedPulling="2025-09-13 00:00:24.77019268 +0000 UTC m=+51.146684398" lastFinishedPulling="2025-09-13 00:01:12.088834618 +0000 UTC m=+98.465326336" observedRunningTime="2025-09-13 00:01:12.48140334 +0000 UTC m=+98.857895058" watchObservedRunningTime="2025-09-13 00:01:16.501954304 +0000 UTC m=+102.878446062" Sep 13 00:01:16.769127 kubelet[3348]: I0913 00:01:16.768633 3348 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:01:16.774675 kubelet[3348]: I0913 00:01:16.774621 3348 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:01:21.885267 update_engine[1793]: I20250913 00:01:21.885192 1793 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:01:21.885267 update_engine[1793]: I20250913 00:01:21.885255 1793 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:01:21.885735 update_engine[1793]: I20250913 00:01:21.885491 1793 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:01:21.887700 update_engine[1793]: I20250913 00:01:21.887020 1793 omaha_request_params.cc:62] Current group set to lts Sep 13 00:01:21.887700 update_engine[1793]: I20250913 00:01:21.887166 1793 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:01:21.887700 update_engine[1793]: I20250913 00:01:21.887177 1793 update_attempter.cc:643] Scheduling an action processor start. 
Sep 13 00:01:21.887700 update_engine[1793]: I20250913 00:01:21.887198 1793 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:01:21.889031 update_engine[1793]: I20250913 00:01:21.888988 1793 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:01:21.896229 update_engine[1793]: I20250913 00:01:21.892252 1793 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:01:21.896229 update_engine[1793]: I20250913 00:01:21.892288 1793 omaha_request_action.cc:272] Request: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: Sep 13 00:01:21.896229 update_engine[1793]: I20250913 00:01:21.892295 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:21.899602 locksmithd[1887]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:01:21.901830 update_engine[1793]: I20250913 00:01:21.901319 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:21.901830 update_engine[1793]: I20250913 00:01:21.901670 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:01:21.975926 update_engine[1793]: E20250913 00:01:21.975759 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:21.975926 update_engine[1793]: I20250913 00:01:21.975885 1793 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:01:22.360062 systemd[1]: run-containerd-runc-k8s.io-e49c18cfe43f09291268bbb9fec86f848729d406869b09099470d79b8e686e82-runc.i9tIAf.mount: Deactivated successfully. 
Sep 13 00:01:31.804174 update_engine[1793]: I20250913 00:01:31.803140 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:31.804174 update_engine[1793]: I20250913 00:01:31.803381 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:31.804174 update_engine[1793]: I20250913 00:01:31.803620 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:01:31.846398 update_engine[1793]: E20250913 00:01:31.846261 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:31.846398 update_engine[1793]: I20250913 00:01:31.846357 1793 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:01:34.237796 systemd[1]: run-containerd-runc-k8s.io-16e91bddd521d2beeb71f10b66cd6d22604c58d64cbaec3eae9ef46f5ce7b7c5-runc.EIwauv.mount: Deactivated successfully. Sep 13 00:01:34.377788 kubelet[3348]: I0913 00:01:34.377071 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4wv45" podStartSLOduration=56.551566543 podStartE2EDuration="1m41.377032598s" podCreationTimestamp="2025-09-12 23:59:53 +0000 UTC" firstStartedPulling="2025-09-13 00:00:31.208940234 +0000 UTC m=+57.585431952" lastFinishedPulling="2025-09-13 00:01:16.034406329 +0000 UTC m=+102.410898007" observedRunningTime="2025-09-13 00:01:16.502957543 +0000 UTC m=+102.879449261" watchObservedRunningTime="2025-09-13 00:01:34.377032598 +0000 UTC m=+120.753524316" Sep 13 00:01:35.565111 containerd[1830]: time="2025-09-13T00:01:35.564778800Z" level=info msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.624 [WARNING][6596] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c611723-85b5-4e9c-9b6a-4091949cf62d", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65", Pod:"coredns-7c65d6cfc9-88m8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali225e89f1b06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:35.682206 containerd[1830]: 
2025-09-13 00:01:35.624 [INFO][6596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.624 [INFO][6596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" iface="eth0" netns="" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.624 [INFO][6596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.624 [INFO][6596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.661 [INFO][6604] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.661 [INFO][6604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.663 [INFO][6604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.673 [WARNING][6604] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.673 [INFO][6604] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.675 [INFO][6604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:35.682206 containerd[1830]: 2025-09-13 00:01:35.678 [INFO][6596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.683483 containerd[1830]: time="2025-09-13T00:01:35.682231460Z" level=info msg="TearDown network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" successfully" Sep 13 00:01:35.683483 containerd[1830]: time="2025-09-13T00:01:35.682268420Z" level=info msg="StopPodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" returns successfully" Sep 13 00:01:35.683541 containerd[1830]: time="2025-09-13T00:01:35.683486699Z" level=info msg="RemovePodSandbox for \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" Sep 13 00:01:35.683541 containerd[1830]: time="2025-09-13T00:01:35.683521899Z" level=info msg="Forcibly stopping sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\"" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.744 [WARNING][6618] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3c611723-85b5-4e9c-9b6a-4091949cf62d", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"f39e9d239659b4e598e910793efb64832e6eeecc511388e0221734c10f89ae65", Pod:"coredns-7c65d6cfc9-88m8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali225e89f1b06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:35.824215 containerd[1830]: 
2025-09-13 00:01:35.744 [INFO][6618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.744 [INFO][6618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" iface="eth0" netns="" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.745 [INFO][6618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.745 [INFO][6618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.772 [INFO][6625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.772 [INFO][6625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.772 [INFO][6625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.807 [WARNING][6625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.807 [INFO][6625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" HandleID="k8s-pod-network.23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Workload="ci--4081.3.5--n--d2844e2d10-k8s-coredns--7c65d6cfc9--88m8h-eth0" Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.816 [INFO][6625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:35.824215 containerd[1830]: 2025-09-13 00:01:35.818 [INFO][6618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8" Sep 13 00:01:35.826631 containerd[1830]: time="2025-09-13T00:01:35.824833650Z" level=info msg="TearDown network for sandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" successfully" Sep 13 00:01:35.850161 containerd[1830]: time="2025-09-13T00:01:35.850113900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:01:35.850395 containerd[1830]: time="2025-09-13T00:01:35.850353620Z" level=info msg="RemovePodSandbox \"23ad09f031a788098b00b6ddc6e15e92d7e8e6a0134a66145ab6d4ecf4aacdd8\" returns successfully" Sep 13 00:01:35.852037 containerd[1830]: time="2025-09-13T00:01:35.850964819Z" level=info msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.926 [WARNING][6640] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"22768cde-472b-4c66-b597-59195c05c1b3", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74", Pod:"goldmane-7988f88666-c6kj9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali83e8384ee4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.926 [INFO][6640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.926 [INFO][6640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" iface="eth0" netns="" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.926 [INFO][6640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.926 [INFO][6640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.960 [INFO][6647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.960 [INFO][6647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.961 [INFO][6647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.973 [WARNING][6647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.973 [INFO][6647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.975 [INFO][6647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:35.984152 containerd[1830]: 2025-09-13 00:01:35.979 [INFO][6640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:35.986301 containerd[1830]: time="2025-09-13T00:01:35.984772899Z" level=info msg="TearDown network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" successfully" Sep 13 00:01:35.986301 containerd[1830]: time="2025-09-13T00:01:35.984810659Z" level=info msg="StopPodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" returns successfully" Sep 13 00:01:35.988071 containerd[1830]: time="2025-09-13T00:01:35.987305136Z" level=info msg="RemovePodSandbox for \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" Sep 13 00:01:35.988071 containerd[1830]: time="2025-09-13T00:01:35.987339696Z" level=info msg="Forcibly stopping sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\"" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.035 [WARNING][6661] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"22768cde-472b-4c66-b597-59195c05c1b3", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7835a8101c0c90b53314c6360a9c8d05a7f1696bcde0a666d0977c5a19e37c74", Pod:"goldmane-7988f88666-c6kj9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali83e8384ee4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.036 [INFO][6661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.036 [INFO][6661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" iface="eth0" netns="" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.036 [INFO][6661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.036 [INFO][6661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.072 [INFO][6669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.072 [INFO][6669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.072 [INFO][6669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.087 [WARNING][6669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.088 [INFO][6669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" HandleID="k8s-pod-network.f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Workload="ci--4081.3.5--n--d2844e2d10-k8s-goldmane--7988f88666--c6kj9-eth0" Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.091 [INFO][6669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:36.104961 containerd[1830]: 2025-09-13 00:01:36.095 [INFO][6661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344" Sep 13 00:01:36.104961 containerd[1830]: time="2025-09-13T00:01:36.101327720Z" level=info msg="TearDown network for sandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" successfully" Sep 13 00:01:36.111023 containerd[1830]: time="2025-09-13T00:01:36.110972829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:01:36.111269 containerd[1830]: time="2025-09-13T00:01:36.111248308Z" level=info msg="RemovePodSandbox \"f9492c6b298b289c237c63e036581178376ee2a5baada95771f9291bed511344\" returns successfully" Sep 13 00:01:36.111825 containerd[1830]: time="2025-09-13T00:01:36.111798628Z" level=info msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.173 [WARNING][6683] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd16400c-5cb4-4ddc-a634-d229d8486603", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673", Pod:"calico-apiserver-7d7646db59-7hfrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c3c8d5146", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.174 [INFO][6683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.174 [INFO][6683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" iface="eth0" netns="" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.174 [INFO][6683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.174 [INFO][6683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.209 [INFO][6690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.209 [INFO][6690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.209 [INFO][6690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.225 [WARNING][6690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.226 [INFO][6690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.228 [INFO][6690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:36.233136 containerd[1830]: 2025-09-13 00:01:36.229 [INFO][6683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.234084 containerd[1830]: time="2025-09-13T00:01:36.233724442Z" level=info msg="TearDown network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" successfully" Sep 13 00:01:36.234084 containerd[1830]: time="2025-09-13T00:01:36.233765002Z" level=info msg="StopPodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" returns successfully" Sep 13 00:01:36.234727 containerd[1830]: time="2025-09-13T00:01:36.234435201Z" level=info msg="RemovePodSandbox for \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" Sep 13 00:01:36.234727 containerd[1830]: time="2025-09-13T00:01:36.234472641Z" level=info msg="Forcibly stopping sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\"" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.290 [WARNING][6704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0", GenerateName:"calico-apiserver-7d7646db59-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd16400c-5cb4-4ddc-a634-d229d8486603", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 59, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7646db59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-d2844e2d10", ContainerID:"7a3370b0798dea7babef32a853b7a87092ac70d3e63f273deda09a9d22f8b673", Pod:"calico-apiserver-7d7646db59-7hfrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07c3c8d5146", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.290 [INFO][6704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.291 [INFO][6704] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" iface="eth0" netns="" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.291 [INFO][6704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.291 [INFO][6704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.318 [INFO][6711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.318 [INFO][6711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.318 [INFO][6711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.328 [WARNING][6711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.328 [INFO][6711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" HandleID="k8s-pod-network.e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Workload="ci--4081.3.5--n--d2844e2d10-k8s-calico--apiserver--7d7646db59--7hfrz-eth0" Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.330 [INFO][6711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:36.333559 containerd[1830]: 2025-09-13 00:01:36.331 [INFO][6704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52" Sep 13 00:01:36.335215 containerd[1830]: time="2025-09-13T00:01:36.333602843Z" level=info msg="TearDown network for sandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" successfully" Sep 13 00:01:36.347181 containerd[1830]: time="2025-09-13T00:01:36.346507267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:01:36.347181 containerd[1830]: time="2025-09-13T00:01:36.346925027Z" level=info msg="RemovePodSandbox \"e3dae3e1477178b29313eefec8647babe33ca2719ea045089f1cd2ccfd870f52\" returns successfully" Sep 13 00:01:41.801901 update_engine[1793]: I20250913 00:01:41.801815 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:41.802344 update_engine[1793]: I20250913 00:01:41.802071 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:41.802397 update_engine[1793]: I20250913 00:01:41.802358 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:01:41.817418 update_engine[1793]: E20250913 00:01:41.817359 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:41.817512 update_engine[1793]: I20250913 00:01:41.817454 1793 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:01:51.803223 update_engine[1793]: I20250913 00:01:51.803133 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:51.803769 update_engine[1793]: I20250913 00:01:51.803434 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:51.803769 update_engine[1793]: I20250913 00:01:51.803701 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:01:51.912254 update_engine[1793]: E20250913 00:01:51.912185 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:51.912419 update_engine[1793]: I20250913 00:01:51.912283 1793 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:01:51.912419 update_engine[1793]: I20250913 00:01:51.912292 1793 omaha_request_action.cc:617] Omaha request response: Sep 13 00:01:51.912419 update_engine[1793]: E20250913 00:01:51.912396 1793 omaha_request_action.cc:636] Omaha request network transfer failed. 
Sep 13 00:01:51.912419 update_engine[1793]: I20250913 00:01:51.912415 1793 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 00:01:51.912505 update_engine[1793]: I20250913 00:01:51.912420 1793 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:01:51.912505 update_engine[1793]: I20250913 00:01:51.912425 1793 update_attempter.cc:306] Processing Done. Sep 13 00:01:51.912505 update_engine[1793]: E20250913 00:01:51.912444 1793 update_attempter.cc:619] Update failed. Sep 13 00:01:51.912505 update_engine[1793]: I20250913 00:01:51.912448 1793 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 00:01:51.912505 update_engine[1793]: I20250913 00:01:51.912453 1793 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 00:01:51.912505 update_engine[1793]: I20250913 00:01:51.912459 1793 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Sep 13 00:01:51.912719 update_engine[1793]: I20250913 00:01:51.912665 1793 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:01:51.912750 update_engine[1793]: I20250913 00:01:51.912726 1793 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:01:51.912750 update_engine[1793]: I20250913 00:01:51.912735 1793 omaha_request_action.cc:272] Request: Sep 13 00:01:51.912750 update_engine[1793]: Sep 13 00:01:51.912750 update_engine[1793]: Sep 13 00:01:51.912750 update_engine[1793]: Sep 13 00:01:51.912750 update_engine[1793]: Sep 13 00:01:51.912750 update_engine[1793]: Sep 13 00:01:51.912750 update_engine[1793]: Sep 13 00:01:51.912750 update_engine[1793]: I20250913 00:01:51.912741 1793 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:51.912924 update_engine[1793]: I20250913 00:01:51.912904 1793 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:51.913208 locksmithd[1887]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 00:01:51.913619 update_engine[1793]: I20250913 00:01:51.913206 1793 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:01:51.985414 update_engine[1793]: E20250913 00:01:51.985342 1793 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985446 1793 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985456 1793 omaha_request_action.cc:617] Omaha request response: Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985463 1793 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985468 1793 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985479 1793 update_attempter.cc:306] Processing Done. Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985487 1793 update_attempter.cc:310] Error event sent. Sep 13 00:01:51.985585 update_engine[1793]: I20250913 00:01:51.985502 1793 update_check_scheduler.cc:74] Next update check in 44m12s Sep 13 00:01:51.985956 locksmithd[1887]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 00:01:52.609591 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:56082.service - OpenSSH per-connection server daemon (10.200.16.10:56082). Sep 13 00:01:53.063174 sshd[6750]: Accepted publickey for core from 10.200.16.10 port 56082 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:53.065929 sshd[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:53.070906 systemd-logind[1788]: New session 10 of user core. Sep 13 00:01:53.075450 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 13 00:01:53.511443 sshd[6750]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:53.515014 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:56082.service: Deactivated successfully. Sep 13 00:01:53.519167 systemd-logind[1788]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:01:53.520667 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:01:53.522214 systemd-logind[1788]: Removed session 10. Sep 13 00:01:58.592598 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:56092.service - OpenSSH per-connection server daemon (10.200.16.10:56092). Sep 13 00:01:59.027927 sshd[6788]: Accepted publickey for core from 10.200.16.10 port 56092 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:59.028876 sshd[6788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:59.038189 systemd-logind[1788]: New session 11 of user core. Sep 13 00:01:59.045746 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:01:59.436367 sshd[6788]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:59.443812 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:56092.service: Deactivated successfully. Sep 13 00:01:59.459185 systemd-logind[1788]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:01:59.461548 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:01:59.464803 systemd-logind[1788]: Removed session 11. Sep 13 00:02:04.507426 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:56618.service - OpenSSH per-connection server daemon (10.200.16.10:56618). Sep 13 00:02:04.941189 sshd[6843]: Accepted publickey for core from 10.200.16.10 port 56618 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:04.943910 sshd[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:04.950393 systemd-logind[1788]: New session 12 of user core. 
Sep 13 00:02:04.955858 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:02:05.367734 sshd[6843]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:05.372454 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:56618.service: Deactivated successfully. Sep 13 00:02:05.377041 systemd-logind[1788]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:02:05.377702 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:02:05.379493 systemd-logind[1788]: Removed session 12. Sep 13 00:02:05.439452 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:56630.service - OpenSSH per-connection server daemon (10.200.16.10:56630). Sep 13 00:02:05.857884 sshd[6857]: Accepted publickey for core from 10.200.16.10 port 56630 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:05.859628 sshd[6857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:05.864384 systemd-logind[1788]: New session 13 of user core. Sep 13 00:02:05.868455 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:02:06.303034 sshd[6857]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:06.308462 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:56630.service: Deactivated successfully. Sep 13 00:02:06.314199 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:02:06.315275 systemd-logind[1788]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:02:06.317648 systemd-logind[1788]: Removed session 13. Sep 13 00:02:06.382484 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:56632.service - OpenSSH per-connection server daemon (10.200.16.10:56632). 
Sep 13 00:02:06.807064 sshd[6869]: Accepted publickey for core from 10.200.16.10 port 56632 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:06.808909 sshd[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:06.814571 systemd-logind[1788]: New session 14 of user core. Sep 13 00:02:06.819502 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:02:07.204383 sshd[6869]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:07.213194 systemd-logind[1788]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:02:07.214584 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:56632.service: Deactivated successfully. Sep 13 00:02:07.219478 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:02:07.221859 systemd-logind[1788]: Removed session 14. Sep 13 00:02:12.283468 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:40208.service - OpenSSH per-connection server daemon (10.200.16.10:40208). Sep 13 00:02:12.704949 sshd[6885]: Accepted publickey for core from 10.200.16.10 port 40208 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:12.706782 sshd[6885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:12.712931 systemd-logind[1788]: New session 15 of user core. Sep 13 00:02:12.720895 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:02:13.104183 sshd[6885]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:13.108933 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:40208.service: Deactivated successfully. Sep 13 00:02:13.113785 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:02:13.114542 systemd-logind[1788]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:02:13.115810 systemd-logind[1788]: Removed session 15. 
Sep 13 00:02:18.181487 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:40210.service - OpenSSH per-connection server daemon (10.200.16.10:40210). Sep 13 00:02:18.598312 sshd[6919]: Accepted publickey for core from 10.200.16.10 port 40210 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:18.600063 sshd[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:18.606388 systemd-logind[1788]: New session 16 of user core. Sep 13 00:02:18.615637 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:02:19.000406 sshd[6919]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:19.006457 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:40210.service: Deactivated successfully. Sep 13 00:02:19.009359 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:02:19.011928 systemd-logind[1788]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:02:19.013387 systemd-logind[1788]: Removed session 16. Sep 13 00:02:24.078329 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:36226.service - OpenSSH per-connection server daemon (10.200.16.10:36226). Sep 13 00:02:24.513884 sshd[6975]: Accepted publickey for core from 10.200.16.10 port 36226 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:24.515948 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:24.520839 systemd-logind[1788]: New session 17 of user core. Sep 13 00:02:24.531559 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:02:24.905966 sshd[6975]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:24.910589 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:36226.service: Deactivated successfully. Sep 13 00:02:24.915114 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:02:24.915453 systemd-logind[1788]: Session 17 logged out. Waiting for processes to exit. 
Sep 13 00:02:24.917782 systemd-logind[1788]: Removed session 17. Sep 13 00:02:29.980669 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:56456.service - OpenSSH per-connection server daemon (10.200.16.10:56456). Sep 13 00:02:30.415789 sshd[6993]: Accepted publickey for core from 10.200.16.10 port 56456 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:30.417181 sshd[6993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:30.426912 systemd-logind[1788]: New session 18 of user core. Sep 13 00:02:30.433179 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:02:30.842401 sshd[6993]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:30.847456 systemd-logind[1788]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:02:30.849743 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:56456.service: Deactivated successfully. Sep 13 00:02:30.859692 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:02:30.864437 systemd-logind[1788]: Removed session 18. Sep 13 00:02:35.925517 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:56464.service - OpenSSH per-connection server daemon (10.200.16.10:56464). Sep 13 00:02:36.364791 sshd[7048]: Accepted publickey for core from 10.200.16.10 port 56464 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:36.366731 sshd[7048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:36.375751 systemd-logind[1788]: New session 19 of user core. Sep 13 00:02:36.382252 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:02:36.759004 sshd[7048]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:36.764406 systemd-logind[1788]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:02:36.765190 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:56464.service: Deactivated successfully. 
Sep 13 00:02:36.769666 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:02:36.771325 systemd-logind[1788]: Removed session 19. Sep 13 00:02:36.834686 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:56478.service - OpenSSH per-connection server daemon (10.200.16.10:56478). Sep 13 00:02:37.249944 sshd[7061]: Accepted publickey for core from 10.200.16.10 port 56478 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:37.251733 sshd[7061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:37.257401 systemd-logind[1788]: New session 20 of user core. Sep 13 00:02:37.264439 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:02:37.845371 sshd[7061]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:37.848817 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:56478.service: Deactivated successfully. Sep 13 00:02:37.854738 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:02:37.856241 systemd-logind[1788]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:02:37.857552 systemd-logind[1788]: Removed session 20. Sep 13 00:02:37.918410 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:56488.service - OpenSSH per-connection server daemon (10.200.16.10:56488). Sep 13 00:02:38.340751 sshd[7073]: Accepted publickey for core from 10.200.16.10 port 56488 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:38.342475 sshd[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:38.347564 systemd-logind[1788]: New session 21 of user core. Sep 13 00:02:38.353440 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:02:40.633660 sshd[7073]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:40.640869 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:56488.service: Deactivated successfully. 
Sep 13 00:02:40.646922 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:02:40.646999 systemd-logind[1788]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:02:40.650874 systemd-logind[1788]: Removed session 21. Sep 13 00:02:40.705461 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:38156.service - OpenSSH per-connection server daemon (10.200.16.10:38156). Sep 13 00:02:41.128186 sshd[7099]: Accepted publickey for core from 10.200.16.10 port 38156 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:41.133989 sshd[7099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:41.140020 systemd-logind[1788]: New session 22 of user core. Sep 13 00:02:41.145813 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:02:41.666391 sshd[7099]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:41.672446 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:38156.service: Deactivated successfully. Sep 13 00:02:41.676631 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:02:41.679602 systemd-logind[1788]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:02:41.681246 systemd-logind[1788]: Removed session 22. Sep 13 00:02:41.746708 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:38170.service - OpenSSH per-connection server daemon (10.200.16.10:38170). Sep 13 00:02:42.184178 sshd[7111]: Accepted publickey for core from 10.200.16.10 port 38170 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:42.187467 sshd[7111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:42.196764 systemd-logind[1788]: New session 23 of user core. Sep 13 00:02:42.203564 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 13 00:02:42.641390 sshd[7111]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:42.649097 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:38170.service: Deactivated successfully.
Sep 13 00:02:42.655968 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:02:42.658942 systemd-logind[1788]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:02:42.663664 systemd-logind[1788]: Removed session 23.
Sep 13 00:02:47.711449 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:38176.service - OpenSSH per-connection server daemon (10.200.16.10:38176).
Sep 13 00:02:48.135814 sshd[7128]: Accepted publickey for core from 10.200.16.10 port 38176 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:02:48.137432 sshd[7128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:48.149945 systemd-logind[1788]: New session 24 of user core.
Sep 13 00:02:48.156509 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:02:48.532826 sshd[7128]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:48.537776 systemd-logind[1788]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:02:48.538383 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:38176.service: Deactivated successfully.
Sep 13 00:02:48.543737 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:02:48.545018 systemd-logind[1788]: Removed session 24.
Sep 13 00:02:53.609384 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:54180.service - OpenSSH per-connection server daemon (10.200.16.10:54180).
Sep 13 00:02:54.042109 sshd[7164]: Accepted publickey for core from 10.200.16.10 port 54180 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:02:54.043689 sshd[7164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:54.048464 systemd-logind[1788]: New session 25 of user core.
Sep 13 00:02:54.054479 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:02:54.457414 sshd[7164]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:54.460919 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:54180.service: Deactivated successfully.
Sep 13 00:02:54.467219 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:02:54.469267 systemd-logind[1788]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:02:54.471387 systemd-logind[1788]: Removed session 25.
Sep 13 00:02:59.531426 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:54182.service - OpenSSH per-connection server daemon (10.200.16.10:54182).
Sep 13 00:02:59.956129 sshd[7178]: Accepted publickey for core from 10.200.16.10 port 54182 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:02:59.957200 sshd[7178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:59.962050 systemd-logind[1788]: New session 26 of user core.
Sep 13 00:02:59.964400 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:03:00.441525 sshd[7178]: pam_unix(sshd:session): session closed for user core
Sep 13 00:03:00.448751 systemd-logind[1788]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:03:00.449082 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:54182.service: Deactivated successfully.
Sep 13 00:03:00.457872 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:03:00.462015 systemd-logind[1788]: Removed session 26.
Sep 13 00:03:05.518406 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.16.10:37272.service - OpenSSH per-connection server daemon (10.200.16.10:37272).
Sep 13 00:03:05.943582 sshd[7233]: Accepted publickey for core from 10.200.16.10 port 37272 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:03:05.945982 sshd[7233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:03:05.953892 systemd-logind[1788]: New session 27 of user core.
Sep 13 00:03:05.957738 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:03:06.320270 sshd[7233]: pam_unix(sshd:session): session closed for user core
Sep 13 00:03:06.325754 systemd[1]: sshd@24-10.200.20.38:22-10.200.16.10:37272.service: Deactivated successfully.
Sep 13 00:03:06.330116 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:03:06.331046 systemd-logind[1788]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:03:06.332333 systemd-logind[1788]: Removed session 27.
Sep 13 00:03:11.394451 systemd[1]: Started sshd@25-10.200.20.38:22-10.200.16.10:49086.service - OpenSSH per-connection server daemon (10.200.16.10:49086).
Sep 13 00:03:11.821382 sshd[7256]: Accepted publickey for core from 10.200.16.10 port 49086 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:03:11.823235 sshd[7256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:03:11.830521 systemd-logind[1788]: New session 28 of user core.
Sep 13 00:03:11.836257 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 13 00:03:12.231783 sshd[7256]: pam_unix(sshd:session): session closed for user core
Sep 13 00:03:12.236732 systemd[1]: sshd@25-10.200.20.38:22-10.200.16.10:49086.service: Deactivated successfully.
Sep 13 00:03:12.241830 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:03:12.242950 systemd-logind[1788]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:03:12.243981 systemd-logind[1788]: Removed session 28.
Sep 13 00:03:17.307400 systemd[1]: Started sshd@26-10.200.20.38:22-10.200.16.10:49096.service - OpenSSH per-connection server daemon (10.200.16.10:49096).
Sep 13 00:03:17.730922 sshd[7293]: Accepted publickey for core from 10.200.16.10 port 49096 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:03:17.732772 sshd[7293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:03:17.739558 systemd-logind[1788]: New session 29 of user core.
Sep 13 00:03:17.745678 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 13 00:03:18.131581 sshd[7293]: pam_unix(sshd:session): session closed for user core
Sep 13 00:03:18.135388 systemd[1]: sshd@26-10.200.20.38:22-10.200.16.10:49096.service: Deactivated successfully.
Sep 13 00:03:18.139202 systemd[1]: session-29.scope: Deactivated successfully.
Sep 13 00:03:18.143612 systemd-logind[1788]: Session 29 logged out. Waiting for processes to exit.
Sep 13 00:03:18.148593 systemd-logind[1788]: Removed session 29.