Sep 5 23:51:09.255537 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 23:51:09.255564 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:51:09.255572 kernel: KASLR enabled
Sep 5 23:51:09.255578 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 5 23:51:09.255586 kernel: printk: bootconsole [pl11] enabled
Sep 5 23:51:09.255592 kernel: efi: EFI v2.7 by EDK II
Sep 5 23:51:09.255599 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Sep 5 23:51:09.255605 kernel: random: crng init done
Sep 5 23:51:09.255611 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:51:09.255617 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 5 23:51:09.255623 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255629 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255637 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 5 23:51:09.255643 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255651 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255657 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255663 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255671 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255678 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255684 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 5 23:51:09.255690 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 5 23:51:09.255697 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 5 23:51:09.255703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 5 23:51:09.255709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 5 23:51:09.255715 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 5 23:51:09.255722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 5 23:51:09.255728 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 5 23:51:09.255734 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 5 23:51:09.255742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 5 23:51:09.255749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 5 23:51:09.255755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 5 23:51:09.255761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 5 23:51:09.255768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 5 23:51:09.255774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 5 23:51:09.255780 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 5 23:51:09.255786 kernel: Zone ranges:
Sep 5 23:51:09.255793 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 5 23:51:09.255799 kernel: DMA32 empty
Sep 5 23:51:09.255805 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 5 23:51:09.255812 kernel: Movable zone start for each node
Sep 5 23:51:09.255822 kernel: Early memory node ranges
Sep 5 23:51:09.255829 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 5 23:51:09.255836 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 5 23:51:09.255843 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 5 23:51:09.255850 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 5 23:51:09.255858 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 5 23:51:09.255865 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 5 23:51:09.255871 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 5 23:51:09.255878 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 5 23:51:09.255885 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 5 23:51:09.255892 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:51:09.255898 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 23:51:09.255905 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:51:09.255912 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 5 23:51:09.255919 kernel: psci: SMC Calling Convention v1.4
Sep 5 23:51:09.255925 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 5 23:51:09.255932 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 5 23:51:09.255941 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:51:09.255947 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:51:09.255954 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:51:09.255961 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:51:09.255968 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:51:09.255974 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 23:51:09.255981 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:51:09.255988 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 23:51:09.255994 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 23:51:09.256001 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 23:51:09.256008 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 5 23:51:09.256016 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 23:51:09.256023 kernel: alternatives: applying boot alternatives
Sep 5 23:51:09.256031 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:51:09.256038 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:51:09.256045 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:51:09.256052 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:51:09.256059 kernel: Fallback order for Node 0: 0
Sep 5 23:51:09.256066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 5 23:51:09.256072 kernel: Policy zone: Normal
Sep 5 23:51:09.256079 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:51:09.256085 kernel: software IO TLB: area num 2.
Sep 5 23:51:09.256093 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 5 23:51:09.256101 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Sep 5 23:51:09.256108 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:51:09.256114 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:51:09.256122 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:51:09.256129 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:51:09.256136 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:51:09.256143 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:51:09.256149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 23:51:09.256156 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 5 23:51:09.256163 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:51:09.256171 kernel: GICv3: 960 SPIs implemented
Sep 5 23:51:09.256177 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:51:09.256184 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:51:09.256191 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 23:51:09.256198 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 5 23:51:09.256204 kernel: ITS: No ITS available, not enabling LPIs
Sep 5 23:51:09.256211 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:51:09.256218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:51:09.256224 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 23:51:09.256231 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 23:51:09.256238 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 23:51:09.256247 kernel: Console: colour dummy device 80x25
Sep 5 23:51:09.256254 kernel: printk: console [tty1] enabled
Sep 5 23:51:09.256261 kernel: ACPI: Core revision 20230628
Sep 5 23:51:09.256268 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 23:51:09.256275 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:51:09.256282 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:51:09.256289 kernel: landlock: Up and running.
Sep 5 23:51:09.256296 kernel: SELinux: Initializing.
Sep 5 23:51:09.256303 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:51:09.256310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:51:09.256318 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:51:09.256325 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:51:09.256332 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 5 23:51:09.256339 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 5 23:51:09.256346 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 5 23:51:09.256353 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:51:09.256361 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:51:09.256374 kernel: Remapping and enabling EFI services.
Sep 5 23:51:09.256381 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:51:09.256388 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:51:09.256396 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 5 23:51:09.256404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 23:51:09.256434 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 23:51:09.256442 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 23:51:09.256449 kernel: SMP: Total of 2 processors activated.
Sep 5 23:51:09.256457 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:51:09.256467 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 5 23:51:09.256474 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 23:51:09.256481 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:51:09.256489 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 23:51:09.256496 kernel: CPU features: detected: LSE atomic instructions
Sep 5 23:51:09.256503 kernel: CPU features: detected: Privileged Access Never
Sep 5 23:51:09.256510 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:51:09.256518 kernel: alternatives: applying system-wide alternatives
Sep 5 23:51:09.256525 kernel: devtmpfs: initialized
Sep 5 23:51:09.256534 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:51:09.256542 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 5 23:51:09.256549 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:51:09.256556 kernel: SMBIOS 3.1.0 present.
Sep 5 23:51:09.256564 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 5 23:51:09.256571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:51:09.256578 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:51:09.256586 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:51:09.256594 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:51:09.256602 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:51:09.256610 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 5 23:51:09.256617 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:51:09.256624 kernel: cpuidle: using governor menu
Sep 5 23:51:09.256631 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:51:09.256639 kernel: ASID allocator initialised with 32768 entries
Sep 5 23:51:09.256646 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:51:09.256653 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:51:09.256660 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 23:51:09.256669 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 23:51:09.256676 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:51:09.256684 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:51:09.256691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:51:09.256698 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:51:09.256705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:51:09.256713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:51:09.256720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:51:09.256727 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:51:09.256736 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:51:09.256743 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:51:09.256750 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:51:09.256758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:51:09.256765 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:51:09.256772 kernel: ACPI: Interpreter enabled
Sep 5 23:51:09.256779 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:51:09.256787 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 23:51:09.256794 kernel: printk: console [ttyAMA0] enabled
Sep 5 23:51:09.256803 kernel: printk: bootconsole [pl11] disabled
Sep 5 23:51:09.256810 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 5 23:51:09.256817 kernel: iommu: Default domain type: Translated
Sep 5 23:51:09.256825 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:51:09.256832 kernel: efivars: Registered efivars operations
Sep 5 23:51:09.256839 kernel: vgaarb: loaded
Sep 5 23:51:09.256847 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:51:09.256854 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:51:09.256861 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:51:09.256870 kernel: pnp: PnP ACPI init
Sep 5 23:51:09.256878 kernel: pnp: PnP ACPI: found 0 devices
Sep 5 23:51:09.256885 kernel: NET: Registered PF_INET protocol family
Sep 5 23:51:09.256892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:51:09.256899 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:51:09.256907 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:51:09.256914 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:51:09.256921 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:51:09.256929 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:51:09.256938 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:51:09.256945 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:51:09.256952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:51:09.256959 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:51:09.256967 kernel: kvm [1]: HYP mode not available
Sep 5 23:51:09.256974 kernel: Initialise system trusted keyrings
Sep 5 23:51:09.256981 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:51:09.256988 kernel: Key type asymmetric registered
Sep 5 23:51:09.256996 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:51:09.257004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:51:09.257012 kernel: io scheduler mq-deadline registered
Sep 5 23:51:09.257019 kernel: io scheduler kyber registered
Sep 5 23:51:09.257026 kernel: io scheduler bfq registered
Sep 5 23:51:09.257034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:51:09.257041 kernel: thunder_xcv, ver 1.0
Sep 5 23:51:09.257048 kernel: thunder_bgx, ver 1.0
Sep 5 23:51:09.257055 kernel: nicpf, ver 1.0
Sep 5 23:51:09.257062 kernel: nicvf, ver 1.0
Sep 5 23:51:09.257215 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:51:09.257303 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:51:08 UTC (1757116268)
Sep 5 23:51:09.257314 kernel: efifb: probing for efifb
Sep 5 23:51:09.257323 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 5 23:51:09.257332 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 5 23:51:09.257339 kernel: efifb: scrolling: redraw
Sep 5 23:51:09.257347 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 5 23:51:09.257354 kernel: Console: switching to colour frame buffer device 128x48
Sep 5 23:51:09.257364 kernel: fb0: EFI VGA frame buffer device
Sep 5 23:51:09.257373 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 5 23:51:09.257382 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:51:09.257391 kernel: No ACPI PMU IRQ for CPU0
Sep 5 23:51:09.257399 kernel: No ACPI PMU IRQ for CPU1
Sep 5 23:51:09.257408 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 5 23:51:09.259465 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:51:09.259475 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:51:09.259483 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:51:09.259495 kernel: Segment Routing with IPv6
Sep 5 23:51:09.259503 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:51:09.259510 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:51:09.259517 kernel: Key type dns_resolver registered
Sep 5 23:51:09.259525 kernel: registered taskstats version 1
Sep 5 23:51:09.259532 kernel: Loading compiled-in X.509 certificates
Sep 5 23:51:09.259540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:51:09.259547 kernel: Key type .fscrypt registered
Sep 5 23:51:09.259554 kernel: Key type fscrypt-provisioning registered
Sep 5 23:51:09.259563 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:51:09.259575 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:51:09.259583 kernel: ima: No architecture policies found
Sep 5 23:51:09.259590 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:51:09.259598 kernel: clk: Disabling unused clocks
Sep 5 23:51:09.259605 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:51:09.259613 kernel: Run /init as init process
Sep 5 23:51:09.259620 kernel: with arguments:
Sep 5 23:51:09.259627 kernel: /init
Sep 5 23:51:09.259636 kernel: with environment:
Sep 5 23:51:09.259644 kernel: HOME=/
Sep 5 23:51:09.259651 kernel: TERM=linux
Sep 5 23:51:09.259658 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:51:09.259668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:51:09.259678 systemd[1]: Detected virtualization microsoft.
Sep 5 23:51:09.259686 systemd[1]: Detected architecture arm64.
Sep 5 23:51:09.259694 systemd[1]: Running in initrd.
Sep 5 23:51:09.259703 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:51:09.259711 systemd[1]: Hostname set to .
Sep 5 23:51:09.259719 systemd[1]: Initializing machine ID from random generator.
Sep 5 23:51:09.259727 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:51:09.259734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:51:09.259743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:51:09.259751 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:51:09.259760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:51:09.259769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:51:09.259777 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:51:09.259787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:51:09.259795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:51:09.259803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:51:09.259811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:51:09.259820 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:51:09.259828 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:51:09.259836 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:51:09.259844 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:51:09.259852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:51:09.259860 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:51:09.259868 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:51:09.259876 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:51:09.259884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:51:09.259893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:51:09.259901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:51:09.259909 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:51:09.259917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:51:09.259925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:51:09.259933 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:51:09.259941 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:51:09.259949 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:51:09.259982 systemd-journald[217]: Collecting audit messages is disabled.
Sep 5 23:51:09.260003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:51:09.260011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:51:09.260020 systemd-journald[217]: Journal started
Sep 5 23:51:09.260040 systemd-journald[217]: Runtime Journal (/run/log/journal/d00e3fd6b38b42c8990d72f7e1c33709) is 8.0M, max 78.5M, 70.5M free.
Sep 5 23:51:09.267098 systemd-modules-load[218]: Inserted module 'overlay'
Sep 5 23:51:09.290133 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:51:09.290799 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:51:09.314742 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:51:09.314765 kernel: Bridge firewalling registered
Sep 5 23:51:09.308441 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 5 23:51:09.314724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:51:09.322860 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:51:09.327460 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:51:09.337620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:51:09.365683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:51:09.374580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:51:09.394615 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:51:09.412498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:51:09.419982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:51:09.441334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:51:09.449252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:51:09.461126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:51:09.488750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:51:09.503630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:51:09.519360 dracut-cmdline[249]: dracut-dracut-053
Sep 5 23:51:09.519360 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:51:09.518595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:51:09.579104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:51:09.588940 systemd-resolved[258]: Positive Trust Anchors:
Sep 5 23:51:09.588950 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:51:09.588981 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:51:09.591786 systemd-resolved[258]: Defaulting to hostname 'linux'.
Sep 5 23:51:09.596691 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:51:09.604571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:51:09.712438 kernel: SCSI subsystem initialized
Sep 5 23:51:09.723433 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:51:09.731442 kernel: iscsi: registered transport (tcp)
Sep 5 23:51:09.749362 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:51:09.749439 kernel: QLogic iSCSI HBA Driver
Sep 5 23:51:09.787917 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:51:09.802704 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:51:09.834261 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:51:09.834346 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:51:09.834359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:51:09.890445 kernel: raid6: neonx8 gen() 15764 MB/s
Sep 5 23:51:09.908426 kernel: raid6: neonx4 gen() 15665 MB/s
Sep 5 23:51:09.928420 kernel: raid6: neonx2 gen() 13227 MB/s
Sep 5 23:51:09.949421 kernel: raid6: neonx1 gen() 10517 MB/s
Sep 5 23:51:09.969420 kernel: raid6: int64x8 gen() 6960 MB/s
Sep 5 23:51:09.989419 kernel: raid6: int64x4 gen() 7346 MB/s
Sep 5 23:51:10.009421 kernel: raid6: int64x2 gen() 6133 MB/s
Sep 5 23:51:10.032282 kernel: raid6: int64x1 gen() 5061 MB/s
Sep 5 23:51:10.032301 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s
Sep 5 23:51:10.055911 kernel: raid6: .... xor() 12062 MB/s, rmw enabled
Sep 5 23:51:10.055948 kernel: raid6: using neon recovery algorithm
Sep 5 23:51:10.067819 kernel: xor: measuring software checksum speed
Sep 5 23:51:10.067835 kernel: 8regs : 19793 MB/sec
Sep 5 23:51:10.071007 kernel: 32regs : 19669 MB/sec
Sep 5 23:51:10.074281 kernel: arm64_neon : 27061 MB/sec
Sep 5 23:51:10.077970 kernel: xor: using function: arm64_neon (27061 MB/sec)
Sep 5 23:51:10.128432 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:51:10.139064 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:51:10.158589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:51:10.180108 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Sep 5 23:51:10.185395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:51:10.207623 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:51:10.222235 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Sep 5 23:51:10.249059 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:51:10.266977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:51:10.305223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:51:10.320564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:51:10.347307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:51:10.363739 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:51:10.377592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:51:10.390465 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:51:10.408607 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:51:10.424019 kernel: hv_vmbus: Vmbus version:5.3
Sep 5 23:51:10.430645 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:51:10.447990 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:51:10.489095 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 5 23:51:10.489120 kernel: hv_vmbus: registering driver hid_hyperv
Sep 5 23:51:10.489130 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 5 23:51:10.489139 kernel: hv_vmbus: registering driver hv_netvsc
Sep 5 23:51:10.489148 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Sep 5 23:51:10.489165 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 5 23:51:10.489299 kernel: hv_vmbus: registering driver hv_storvsc
Sep 5 23:51:10.489310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 5 23:51:10.448148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:51:10.519836 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Sep 5 23:51:10.519858 kernel: scsi host1: storvsc_host_t
Sep 5 23:51:10.519911 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:51:10.538721 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 5 23:51:10.538763 kernel: scsi host0: storvsc_host_t
Sep 5 23:51:10.526206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:51:10.526440 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:51:10.553579 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:51:10.580426 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 5 23:51:10.582298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:51:10.606429 kernel: PTP clock support registered
Sep 5 23:51:10.611606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:51:10.481333 kernel: hv_utils: Registering HyperV Utility Driver
Sep 5 23:51:10.496229 kernel: hv_vmbus: registering driver hv_utils
Sep 5 23:51:10.496247 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: VF slot 1 added
Sep 5 23:51:10.506500 kernel: hv_utils: Heartbeat IC version 3.0
Sep 5 23:51:10.506514 kernel: hv_utils: Shutdown IC version 3.2
Sep 5 23:51:10.506522 kernel: hv_utils: TimeSync IC version 4.0
Sep 5 23:51:10.506531 systemd-journald[217]: Time jumped backwards, rotating.
Sep 5 23:51:10.506575 kernel: hv_vmbus: registering driver hv_pci
Sep 5 23:51:10.506583 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Sep 5 23:51:10.506700 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 23:51:10.645637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:51:10.536883 kernel: hv_pci d56a67bc-fbce-4ade-8ac1-e51be9ed34af: PCI VMBus probing: Using version 0x10004
Sep 5 23:51:10.537040 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Sep 5 23:51:10.537140 kernel: hv_pci d56a67bc-fbce-4ade-8ac1-e51be9ed34af: PCI host bridge to bus fbce:00
Sep 5 23:51:10.537221 kernel: pci_bus fbce:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 5 23:51:10.478957 systemd-resolved[258]: Clock change detected. Flushing caches.
Sep 5 23:51:10.537771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:51:10.565563 kernel: pci_bus fbce:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 5 23:51:10.741591 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 5 23:51:10.741940 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Sep 5 23:51:10.745632 kernel: sd 1:0:0:0: [sda] Write Protect is off
Sep 5 23:51:10.745787 kernel: pci fbce:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 5 23:51:10.749956 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 5 23:51:10.756423 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 5 23:51:10.756561 kernel: pci fbce:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 5 23:51:10.775148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:51:10.775196 kernel: pci fbce:00:02.0: enabling Extended Tags
Sep 5 23:51:10.775225 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Sep 5 23:51:10.796459 kernel: pci fbce:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fbce:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 5 23:51:10.796631 kernel: pci_bus fbce:00: busn_res: [bus 00-ff] end is updated to 00
Sep 5 23:51:10.805545 kernel: pci fbce:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 5 23:51:10.846775 kernel: mlx5_core fbce:00:02.0: enabling device (0000 -> 0002)
Sep 5 23:51:10.852874 kernel: mlx5_core fbce:00:02.0: firmware version: 16.31.2424
Sep 5 23:51:11.133806 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: VF registering: eth1
Sep 5 23:51:11.134011 kernel: mlx5_core fbce:00:02.0 eth1: joined to eth0
Sep 5 23:51:11.143024 kernel: mlx5_core fbce:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 5 23:51:11.153878 kernel: mlx5_core fbce:00:02.0 enP64462s1: renamed from eth1
Sep 5 23:51:11.362926 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (488)
Sep 5 23:51:11.377483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 5 23:51:11.409437 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (485)
Sep 5 23:51:11.422451 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 5 23:51:11.429626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 5 23:51:11.453095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:51:11.468990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 5 23:51:11.490974 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 5 23:51:11.511118 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:51:12.519892 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 5 23:51:12.520662 disk-uuid[600]: The operation has completed successfully.
Sep 5 23:51:12.575241 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:51:12.576885 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:51:12.609992 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:51:12.622457 sh[713]: Success
Sep 5 23:51:12.654979 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:51:12.860270 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:51:12.869290 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:51:12.879717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:51:12.914900 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:51:12.914938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:51:12.921236 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:51:12.925990 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:51:12.929803 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:51:13.205546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:51:13.210814 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:51:13.229136 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:51:13.236125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 23:51:13.272473 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:51:13.272519 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:51:13.276645 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:51:13.318183 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:51:13.327158 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:51:13.337869 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:51:13.345697 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:51:13.358117 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:51:13.391093 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:51:13.406987 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:51:13.437251 systemd-networkd[897]: lo: Link UP
Sep 5 23:51:13.440375 systemd-networkd[897]: lo: Gained carrier
Sep 5 23:51:13.442057 systemd-networkd[897]: Enumeration completed
Sep 5 23:51:13.442663 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:51:13.442666 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:51:13.444442 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:51:13.451370 systemd[1]: Reached target network.target - Network.
Sep 5 23:51:13.513882 kernel: mlx5_core fbce:00:02.0 enP64462s1: Link up
Sep 5 23:51:13.594812 systemd-networkd[897]: enP64462s1: Link UP
Sep 5 23:51:13.598415 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: Data path switched to VF: enP64462s1
Sep 5 23:51:13.595066 systemd-networkd[897]: eth0: Link UP
Sep 5 23:51:13.595431 systemd-networkd[897]: eth0: Gained carrier
Sep 5 23:51:13.595441 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:51:13.607072 systemd-networkd[897]: enP64462s1: Gained carrier
Sep 5 23:51:13.630929 systemd-networkd[897]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 5 23:51:14.410352 ignition[866]: Ignition 2.19.0
Sep 5 23:51:14.410363 ignition[866]: Stage: fetch-offline
Sep 5 23:51:14.412391 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:51:14.410421 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.430989 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:51:14.410430 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.410525 ignition[866]: parsed url from cmdline: ""
Sep 5 23:51:14.410528 ignition[866]: no config URL provided
Sep 5 23:51:14.410532 ignition[866]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.410539 ignition[866]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.410544 ignition[866]: failed to fetch config: resource requires networking
Sep 5 23:51:14.410736 ignition[866]: Ignition finished successfully
Sep 5 23:51:14.446810 ignition[907]: Ignition 2.19.0
Sep 5 23:51:14.446817 ignition[907]: Stage: fetch
Sep 5 23:51:14.447049 ignition[907]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.447059 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.447181 ignition[907]: parsed url from cmdline: ""
Sep 5 23:51:14.447184 ignition[907]: no config URL provided
Sep 5 23:51:14.447192 ignition[907]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.447199 ignition[907]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.447220 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 5 23:51:14.590413 ignition[907]: GET result: OK
Sep 5 23:51:14.593243 ignition[907]: config has been read from IMDS userdata
Sep 5 23:51:14.593285 ignition[907]: parsing config with SHA512: ad1198a4e79b634a88d4b1def7d1b3f7ef8bc81e369fb2ad7d733ac64476879c452293d37fd887fe631d9c56163399f60172bb823a317c87e51d19536b92bc8f
Sep 5 23:51:14.597017 unknown[907]: fetched base config from "system"
Sep 5 23:51:14.597375 ignition[907]: fetch: fetch complete
Sep 5 23:51:14.597024 unknown[907]: fetched base config from "system"
Sep 5 23:51:14.597379 ignition[907]: fetch: fetch passed
Sep 5 23:51:14.597029 unknown[907]: fetched user config from "azure"
Sep 5 23:51:14.597418 ignition[907]: Ignition finished successfully
Sep 5 23:51:14.600844 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:51:14.617116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:51:14.642319 ignition[914]: Ignition 2.19.0
Sep 5 23:51:14.642325 ignition[914]: Stage: kargs
Sep 5 23:51:14.649055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:51:14.642602 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.642612 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.645356 ignition[914]: kargs: kargs passed
Sep 5 23:51:14.666117 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:51:14.645412 ignition[914]: Ignition finished successfully
Sep 5 23:51:14.690018 ignition[920]: Ignition 2.19.0
Sep 5 23:51:14.690034 ignition[920]: Stage: disks
Sep 5 23:51:14.698287 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:51:14.690266 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.704075 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:51:14.690276 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.714756 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:51:14.692112 ignition[920]: disks: disks passed
Sep 5 23:51:14.725515 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:51:14.692167 ignition[920]: Ignition finished successfully
Sep 5 23:51:14.736013 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:51:14.746960 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:51:14.779030 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:51:14.842014 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 5 23:51:14.851819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:51:14.869103 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:51:14.921876 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:51:14.922559 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:51:14.927254 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:51:14.986939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:51:14.993994 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:51:15.005357 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 5 23:51:15.033535 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Sep 5 23:51:15.033558 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:51:15.027203 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:51:15.064949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:51:15.064970 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:51:15.064980 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:51:15.027234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:51:15.061372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:51:15.076807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:51:15.092136 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:51:15.549513 coreos-metadata[942]: Sep 05 23:51:15.549 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 5 23:51:15.557324 coreos-metadata[942]: Sep 05 23:51:15.557 INFO Fetch successful
Sep 5 23:51:15.557324 coreos-metadata[942]: Sep 05 23:51:15.557 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 5 23:51:15.573473 coreos-metadata[942]: Sep 05 23:51:15.572 INFO Fetch successful
Sep 5 23:51:15.580054 coreos-metadata[942]: Sep 05 23:51:15.574 INFO wrote hostname ci-4081.3.5-n-29d70f4830 to /sysroot/etc/hostname
Sep 5 23:51:15.578943 systemd-networkd[897]: eth0: Gained IPv6LL
Sep 5 23:51:15.579642 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:51:15.923717 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:51:15.947703 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:51:15.971392 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:51:15.979618 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:51:16.931198 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:51:16.953173 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:51:16.967143 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:51:16.985881 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:51:16.981529 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:51:17.004783 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:51:17.022497 ignition[1061]: INFO : Ignition 2.19.0
Sep 5 23:51:17.027158 ignition[1061]: INFO : Stage: mount
Sep 5 23:51:17.031233 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:17.031233 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:17.031233 ignition[1061]: INFO : mount: mount passed
Sep 5 23:51:17.031233 ignition[1061]: INFO : Ignition finished successfully
Sep 5 23:51:17.031909 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:51:17.057083 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:51:17.077845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:51:17.109668 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Sep 5 23:51:17.109717 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:51:17.119593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:51:17.119622 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:51:17.126882 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:51:17.127649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:51:17.151890 ignition[1090]: INFO : Ignition 2.19.0
Sep 5 23:51:17.151890 ignition[1090]: INFO : Stage: files
Sep 5 23:51:17.151890 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:17.151890 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:17.171746 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:51:17.171746 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:51:17.171746 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:51:17.238023 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:51:17.245052 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:51:17.245052 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:51:17.239390 unknown[1090]: wrote ssh authorized keys file for user: core
Sep 5 23:51:17.275891 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 23:51:17.286655 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 5 23:51:17.317854 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:51:17.411210 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 5 23:51:17.974408 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 23:51:19.179702 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:51:19.179702 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 23:51:19.215822 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:51:19.227170 ignition[1090]: INFO : files: files passed
Sep 5 23:51:19.227170 ignition[1090]: INFO : Ignition finished successfully
Sep 5 23:51:19.232743 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 23:51:19.269150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 23:51:19.286026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 23:51:19.344389 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:51:19.344389 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:51:19.307214 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 23:51:19.366933 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 23:51:19.307303 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 23:51:19.316112 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:51:19.326835 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 23:51:19.361134 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 23:51:19.397966 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 23:51:19.398095 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 23:51:19.410751 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 23:51:19.422511 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:51:19.432399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:51:19.451017 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:51:19.476915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:51:19.496113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:51:19.513043 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:51:19.525001 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:51:19.531215 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:51:19.541588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:51:19.541707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:51:19.556375 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:51:19.561740 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:51:19.572817 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:51:19.583446 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:51:19.594185 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:51:19.606329 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:51:19.617435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:51:19.629778 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:51:19.640175 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:51:19.652840 systemd[1]: Stopped target swap.target - Swaps. Sep 5 23:51:19.662377 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:51:19.662496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:51:19.677071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:51:19.683134 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:51:19.694339 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:51:19.697882 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:51:19.707400 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:51:19.707521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:51:19.725601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:51:19.725720 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:51:19.732292 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:51:19.732381 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:51:19.806890 ignition[1141]: INFO : Ignition 2.19.0 Sep 5 23:51:19.806890 ignition[1141]: INFO : Stage: umount Sep 5 23:51:19.806890 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:51:19.806890 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:51:19.742962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Sep 5 23:51:19.867978 ignition[1141]: INFO : umount: umount passed Sep 5 23:51:19.867978 ignition[1141]: INFO : Ignition finished successfully Sep 5 23:51:19.743055 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:51:19.778109 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:51:19.808115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:51:19.818334 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:51:19.818503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:51:19.836089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:51:19.836203 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:51:19.854686 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:51:19.855294 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:51:19.855408 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:51:19.861642 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:51:19.861875 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:51:19.873292 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:51:19.873361 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:51:19.882399 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:51:19.882441 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:51:19.892160 systemd[1]: Stopped target network.target - Network. Sep 5 23:51:19.903171 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:51:19.903227 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 23:51:19.914030 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:51:19.923485 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:51:19.934920 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:51:19.948549 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:51:19.958721 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:51:19.968070 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:51:19.968141 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:51:19.978103 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:51:19.978154 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:51:19.988446 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:51:19.988504 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:51:19.998661 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:51:19.998707 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:51:20.009126 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:51:20.019111 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:51:20.029491 systemd-networkd[897]: eth0: DHCPv6 lease lost Sep 5 23:51:20.030886 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:51:20.031005 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:51:20.046100 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 5 23:51:20.047444 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:51:20.057896 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:51:20.059883 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:51:20.073238 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:51:20.073301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:51:20.111077 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:51:20.120012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:51:20.120116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:51:20.307847 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: Data path switched from VF: enP64462s1 Sep 5 23:51:20.130971 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:51:20.131028 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:20.140491 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:51:20.140537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:51:20.153510 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:51:20.153553 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:51:20.165177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:51:20.205802 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:51:20.205992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:51:20.217500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:51:20.217539 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:51:20.223447 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:51:20.223474 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:51:20.234237 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:51:20.234286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:51:20.250914 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:51:20.250967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:51:20.265986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:51:20.266035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:51:20.308048 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:51:20.321585 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:51:20.321666 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:51:20.334100 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 23:51:20.334157 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:51:20.345968 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:51:20.346015 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:51:20.357927 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 5 23:51:20.357975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:51:20.370014 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:51:20.370111 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:51:20.390148 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:51:20.390281 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:51:20.729170 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:51:20.729322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:51:20.739081 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:51:20.749115 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:51:20.749172 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:51:20.775086 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:51:20.819398 systemd[1]: Switching root. Sep 5 23:51:20.881376 systemd-journald[217]: Journal stopped Sep 5 23:51:09.255537 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 5 23:51:09.255564 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025 Sep 5 23:51:09.255572 kernel: KASLR enabled Sep 5 23:51:09.255578 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 5 23:51:09.255586 kernel: printk: bootconsole [pl11] enabled Sep 5 23:51:09.255592 kernel: efi: EFI v2.7 by EDK II Sep 5 23:51:09.255599 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Sep 5 23:51:09.255605 kernel: random: crng init done Sep 5 23:51:09.255611 kernel: ACPI: Early table checksum verification disabled Sep 5 23:51:09.255617 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 5 23:51:09.255623 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255629 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255637 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 5 23:51:09.255643 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255651 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255657 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255663 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255671 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255678 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255684 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 5 23:51:09.255690 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 5 23:51:09.255697 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 5 23:51:09.255703 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 5 23:51:09.255709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Sep 5 23:51:09.255715 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Sep 5 23:51:09.255722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Sep 5 23:51:09.255728 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Sep 5 23:51:09.255734 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Sep 5 23:51:09.255742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Sep 5 23:51:09.255749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Sep 5 23:51:09.255755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Sep 5 23:51:09.255761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Sep 5 23:51:09.255768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Sep 5 23:51:09.255774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Sep 5 23:51:09.255780 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Sep 5 23:51:09.255786 kernel: Zone ranges: Sep 5 23:51:09.255793 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 5 23:51:09.255799 kernel: DMA32 empty Sep 5 23:51:09.255805 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 5 23:51:09.255812 kernel: Movable zone start for each node Sep 5 23:51:09.255822 kernel: Early memory node ranges Sep 5 23:51:09.255829 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 5 23:51:09.255836 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Sep 5 23:51:09.255843 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 5 23:51:09.255850 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 5 23:51:09.255858 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 5 23:51:09.255865 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 5 23:51:09.255871 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 5 23:51:09.255878 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 5 23:51:09.255885 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 5 23:51:09.255892 kernel: psci: probing for conduit method from ACPI. Sep 5 23:51:09.255898 kernel: psci: PSCIv1.1 detected in firmware. Sep 5 23:51:09.255905 kernel: psci: Using standard PSCI v0.2 function IDs Sep 5 23:51:09.255912 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 5 23:51:09.255919 kernel: psci: SMC Calling Convention v1.4 Sep 5 23:51:09.255925 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 5 23:51:09.255932 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 5 23:51:09.255941 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 5 23:51:09.255947 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 5 23:51:09.255954 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 5 23:51:09.255961 kernel: Detected PIPT I-cache on CPU0 Sep 5 23:51:09.255968 kernel: CPU features: detected: GIC system register CPU interface Sep 5 23:51:09.255974 kernel: CPU features: detected: Hardware dirty bit management Sep 5 23:51:09.255981 kernel: CPU features: detected: Spectre-BHB Sep 5 23:51:09.255988 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 5 23:51:09.255994 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 5 23:51:09.256001 kernel: CPU features: detected: ARM erratum 1418040 Sep 5 23:51:09.256008 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 5 23:51:09.256016 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 5 23:51:09.256023 kernel: alternatives: applying boot alternatives Sep 5 23:51:09.256031 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:51:09.256038 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 23:51:09.256045 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 23:51:09.256052 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 23:51:09.256059 kernel: Fallback order for Node 0: 0 Sep 5 23:51:09.256066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 5 23:51:09.256072 kernel: Policy zone: Normal Sep 5 23:51:09.256079 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 23:51:09.256085 kernel: software IO TLB: area num 2. Sep 5 23:51:09.256093 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 5 23:51:09.256101 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Sep 5 23:51:09.256108 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 5 23:51:09.256114 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 23:51:09.256122 kernel: rcu: RCU event tracing is enabled. Sep 5 23:51:09.256129 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 5 23:51:09.256136 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 23:51:09.256143 kernel: Tracing variant of Tasks RCU enabled. Sep 5 23:51:09.256149 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 5 23:51:09.256156 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 5 23:51:09.256163 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 23:51:09.256171 kernel: GICv3: 960 SPIs implemented Sep 5 23:51:09.256177 kernel: GICv3: 0 Extended SPIs implemented Sep 5 23:51:09.256184 kernel: Root IRQ handler: gic_handle_irq Sep 5 23:51:09.256191 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 23:51:09.256198 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 5 23:51:09.256204 kernel: ITS: No ITS available, not enabling LPIs Sep 5 23:51:09.256211 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 23:51:09.256218 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:51:09.256224 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 23:51:09.256231 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 23:51:09.256238 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 23:51:09.256247 kernel: Console: colour dummy device 80x25 Sep 5 23:51:09.256254 kernel: printk: console [tty1] enabled Sep 5 23:51:09.256261 kernel: ACPI: Core revision 20230628 Sep 5 23:51:09.256268 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 5 23:51:09.256275 kernel: pid_max: default: 32768 minimum: 301 Sep 5 23:51:09.256282 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 23:51:09.256289 kernel: landlock: Up and running. Sep 5 23:51:09.256296 kernel: SELinux: Initializing. Sep 5 23:51:09.256303 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:51:09.256310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 23:51:09.256318 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:51:09.256325 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 5 23:51:09.256332 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 5 23:51:09.256339 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 5 23:51:09.256346 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 5 23:51:09.256353 kernel: rcu: Hierarchical SRCU implementation. Sep 5 23:51:09.256361 kernel: rcu: Max phase no-delay instances is 400. Sep 5 23:51:09.256374 kernel: Remapping and enabling EFI services. Sep 5 23:51:09.256381 kernel: smp: Bringing up secondary CPUs ... Sep 5 23:51:09.256388 kernel: Detected PIPT I-cache on CPU1 Sep 5 23:51:09.256396 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 5 23:51:09.256404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 23:51:09.256434 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 23:51:09.256442 kernel: smp: Brought up 1 node, 2 CPUs Sep 5 23:51:09.256449 kernel: SMP: Total of 2 processors activated. 
Sep 5 23:51:09.256457 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 23:51:09.256467 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 5 23:51:09.256474 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 23:51:09.256481 kernel: CPU features: detected: CRC32 instructions Sep 5 23:51:09.256489 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 23:51:09.256496 kernel: CPU features: detected: LSE atomic instructions Sep 5 23:51:09.256503 kernel: CPU features: detected: Privileged Access Never Sep 5 23:51:09.256510 kernel: CPU: All CPU(s) started at EL1 Sep 5 23:51:09.256518 kernel: alternatives: applying system-wide alternatives Sep 5 23:51:09.256525 kernel: devtmpfs: initialized Sep 5 23:51:09.256534 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 23:51:09.256542 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 5 23:51:09.256549 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 23:51:09.256556 kernel: SMBIOS 3.1.0 present. Sep 5 23:51:09.256564 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 5 23:51:09.256571 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 23:51:09.256578 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 23:51:09.256586 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 23:51:09.256594 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 23:51:09.256602 kernel: audit: initializing netlink subsys (disabled) Sep 5 23:51:09.256610 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 5 23:51:09.256617 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 23:51:09.256624 kernel: cpuidle: using governor menu Sep 5 23:51:09.256631 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 23:51:09.256639 kernel: ASID allocator initialised with 32768 entries Sep 5 23:51:09.256646 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 23:51:09.256653 kernel: Serial: AMBA PL011 UART driver Sep 5 23:51:09.256660 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 23:51:09.256669 kernel: Modules: 0 pages in range for non-PLT usage Sep 5 23:51:09.256676 kernel: Modules: 509008 pages in range for PLT usage Sep 5 23:51:09.256684 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 23:51:09.256691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 23:51:09.256698 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 23:51:09.256705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 23:51:09.256713 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 23:51:09.256720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 23:51:09.256727 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 23:51:09.256736 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 23:51:09.256743 kernel: ACPI: Added _OSI(Module Device) Sep 5 23:51:09.256750 kernel: ACPI: Added _OSI(Processor Device) Sep 5 23:51:09.256758 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 23:51:09.256765 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 23:51:09.256772 kernel: ACPI: Interpreter enabled Sep 5 23:51:09.256779 kernel: ACPI: Using GIC for interrupt routing Sep 5 23:51:09.256787 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 5 23:51:09.256794 kernel: printk: console [ttyAMA0] enabled Sep 5 23:51:09.256803 kernel: printk: bootconsole [pl11] disabled Sep 5 23:51:09.256810 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 5 23:51:09.256817 kernel: iommu: Default domain type: Translated Sep 5 23:51:09.256825 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 23:51:09.256832 kernel: efivars: Registered efivars operations Sep 5 23:51:09.256839 kernel: vgaarb: loaded Sep 5 23:51:09.256847 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 23:51:09.256854 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 23:51:09.256861 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 23:51:09.256870 kernel: pnp: PnP ACPI init Sep 5 23:51:09.256878 kernel: pnp: PnP ACPI: found 0 devices Sep 5 23:51:09.256885 kernel: NET: Registered PF_INET protocol family Sep 5 23:51:09.256892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 23:51:09.256899 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 23:51:09.256907 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 23:51:09.256914 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 23:51:09.256921 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 23:51:09.256929 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 23:51:09.256938 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:51:09.256945 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 23:51:09.256952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 23:51:09.256959 kernel: PCI: CLS 0 bytes, default 64 
Sep 5 23:51:09.256967 kernel: kvm [1]: HYP mode not available Sep 5 23:51:09.256974 kernel: Initialise system trusted keyrings Sep 5 23:51:09.256981 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 23:51:09.256988 kernel: Key type asymmetric registered Sep 5 23:51:09.256996 kernel: Asymmetric key parser 'x509' registered Sep 5 23:51:09.257004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 5 23:51:09.257012 kernel: io scheduler mq-deadline registered Sep 5 23:51:09.257019 kernel: io scheduler kyber registered Sep 5 23:51:09.257026 kernel: io scheduler bfq registered Sep 5 23:51:09.257034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 23:51:09.257041 kernel: thunder_xcv, ver 1.0 Sep 5 23:51:09.257048 kernel: thunder_bgx, ver 1.0 Sep 5 23:51:09.257055 kernel: nicpf, ver 1.0 Sep 5 23:51:09.257062 kernel: nicvf, ver 1.0 Sep 5 23:51:09.257215 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 23:51:09.257303 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:51:08 UTC (1757116268) Sep 5 23:51:09.257314 kernel: efifb: probing for efifb Sep 5 23:51:09.257323 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 5 23:51:09.257332 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 5 23:51:09.257339 kernel: efifb: scrolling: redraw Sep 5 23:51:09.257347 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 5 23:51:09.257354 kernel: Console: switching to colour frame buffer device 128x48 Sep 5 23:51:09.257364 kernel: fb0: EFI VGA frame buffer device Sep 5 23:51:09.257373 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 5 23:51:09.257382 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 23:51:09.257391 kernel: No ACPI PMU IRQ for CPU0 Sep 5 23:51:09.257399 kernel: No ACPI PMU IRQ for CPU1 Sep 5 23:51:09.257408 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 5 23:51:09.259465 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 5 23:51:09.259475 kernel: watchdog: Hard watchdog permanently disabled Sep 5 23:51:09.259483 kernel: NET: Registered PF_INET6 protocol family Sep 5 23:51:09.259495 kernel: Segment Routing with IPv6 Sep 5 23:51:09.259503 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 23:51:09.259510 kernel: NET: Registered PF_PACKET protocol family Sep 5 23:51:09.259517 kernel: Key type dns_resolver registered Sep 5 23:51:09.259525 kernel: registered taskstats version 1 Sep 5 23:51:09.259532 kernel: Loading compiled-in X.509 certificates Sep 5 23:51:09.259540 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20' Sep 5 23:51:09.259547 kernel: Key type .fscrypt registered Sep 5 23:51:09.259554 kernel: Key type fscrypt-provisioning registered Sep 5 23:51:09.259563 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 5 23:51:09.259575 kernel: ima: Allocated hash algorithm: sha1 Sep 5 23:51:09.259583 kernel: ima: No architecture policies found Sep 5 23:51:09.259590 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 23:51:09.259598 kernel: clk: Disabling unused clocks Sep 5 23:51:09.259605 kernel: Freeing unused kernel memory: 39424K Sep 5 23:51:09.259613 kernel: Run /init as init process Sep 5 23:51:09.259620 kernel: with arguments: Sep 5 23:51:09.259627 kernel: /init Sep 5 23:51:09.259636 kernel: with environment: Sep 5 23:51:09.259644 kernel: HOME=/ Sep 5 23:51:09.259651 kernel: TERM=linux Sep 5 23:51:09.259658 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 23:51:09.259668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:51:09.259678 systemd[1]: Detected virtualization microsoft. Sep 5 23:51:09.259686 systemd[1]: Detected architecture arm64. Sep 5 23:51:09.259694 systemd[1]: Running in initrd. Sep 5 23:51:09.259703 systemd[1]: No hostname configured, using default hostname. Sep 5 23:51:09.259711 systemd[1]: Hostname set to . Sep 5 23:51:09.259719 systemd[1]: Initializing machine ID from random generator. Sep 5 23:51:09.259727 systemd[1]: Queued start job for default target initrd.target. Sep 5 23:51:09.259734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:51:09.259743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:51:09.259751 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 23:51:09.259760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:51:09.259769 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 23:51:09.259777 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 23:51:09.259787 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 23:51:09.259795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 23:51:09.259803 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:51:09.259811 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:51:09.259820 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:51:09.259828 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:51:09.259836 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:51:09.259844 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:51:09.259852 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:51:09.259860 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:51:09.259868 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 23:51:09.259876 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 23:51:09.259884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 5 23:51:09.259893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:51:09.259901 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:51:09.259909 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:51:09.259917 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 23:51:09.259925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:51:09.259933 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 23:51:09.259941 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 23:51:09.259949 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:51:09.259982 systemd-journald[217]: Collecting audit messages is disabled. Sep 5 23:51:09.260003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:51:09.260011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:51:09.260020 systemd-journald[217]: Journal started Sep 5 23:51:09.260040 systemd-journald[217]: Runtime Journal (/run/log/journal/d00e3fd6b38b42c8990d72f7e1c33709) is 8.0M, max 78.5M, 70.5M free. Sep 5 23:51:09.267098 systemd-modules-load[218]: Inserted module 'overlay' Sep 5 23:51:09.290133 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:51:09.290799 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 23:51:09.314742 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 23:51:09.314765 kernel: Bridge firewalling registered Sep 5 23:51:09.308441 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 5 23:51:09.314724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:51:09.322860 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 23:51:09.327460 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:51:09.337620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:51:09.365683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:51:09.374580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:09.394615 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:51:09.412498 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:51:09.419982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:51:09.441334 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:09.449252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:51:09.461126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:51:09.488750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 23:51:09.503630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 5 23:51:09.519360 dracut-cmdline[249]: dracut-dracut-053 Sep 5 23:51:09.519360 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3 Sep 5 23:51:09.518595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:51:09.579104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:51:09.588940 systemd-resolved[258]: Positive Trust Anchors: Sep 5 23:51:09.588950 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:51:09.588981 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:51:09.591786 systemd-resolved[258]: Defaulting to hostname 'linux'. Sep 5 23:51:09.596691 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:51:09.604571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:51:09.712438 kernel: SCSI subsystem initialized Sep 5 23:51:09.723433 kernel: Loading iSCSI transport class v2.0-870. Sep 5 23:51:09.731442 kernel: iscsi: registered transport (tcp) Sep 5 23:51:09.749362 kernel: iscsi: registered transport (qla4xxx) Sep 5 23:51:09.749439 kernel: QLogic iSCSI HBA Driver Sep 5 23:51:09.787917 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 23:51:09.802704 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 23:51:09.834261 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 23:51:09.834346 kernel: device-mapper: uevent: version 1.0.3 Sep 5 23:51:09.834359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 23:51:09.890445 kernel: raid6: neonx8 gen() 15764 MB/s Sep 5 23:51:09.908426 kernel: raid6: neonx4 gen() 15665 MB/s Sep 5 23:51:09.928420 kernel: raid6: neonx2 gen() 13227 MB/s Sep 5 23:51:09.949421 kernel: raid6: neonx1 gen() 10517 MB/s Sep 5 23:51:09.969420 kernel: raid6: int64x8 gen() 6960 MB/s Sep 5 23:51:09.989419 kernel: raid6: int64x4 gen() 7346 MB/s Sep 5 23:51:10.009421 kernel: raid6: int64x2 gen() 6133 MB/s Sep 5 23:51:10.032282 kernel: raid6: int64x1 gen() 5061 MB/s Sep 5 23:51:10.032301 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s Sep 5 23:51:10.055911 kernel: raid6: .... 
xor() 12062 MB/s, rmw enabled Sep 5 23:51:10.055948 kernel: raid6: using neon recovery algorithm Sep 5 23:51:10.067819 kernel: xor: measuring software checksum speed Sep 5 23:51:10.067835 kernel: 8regs : 19793 MB/sec Sep 5 23:51:10.071007 kernel: 32regs : 19669 MB/sec Sep 5 23:51:10.074281 kernel: arm64_neon : 27061 MB/sec Sep 5 23:51:10.077970 kernel: xor: using function: arm64_neon (27061 MB/sec) Sep 5 23:51:10.128432 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 23:51:10.139064 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:51:10.158589 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:51:10.180108 systemd-udevd[436]: Using default interface naming scheme 'v255'. Sep 5 23:51:10.185395 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:51:10.207623 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 23:51:10.222235 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Sep 5 23:51:10.249059 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:51:10.266977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:51:10.305223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:51:10.320564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 23:51:10.347307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 23:51:10.363739 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:51:10.377592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:51:10.390465 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:51:10.408607 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 23:51:10.424019 kernel: hv_vmbus: Vmbus version:5.3 Sep 5 23:51:10.430645 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:51:10.447990 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:51:10.489095 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 5 23:51:10.489120 kernel: hv_vmbus: registering driver hid_hyperv Sep 5 23:51:10.489130 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 23:51:10.489139 kernel: hv_vmbus: registering driver hv_netvsc Sep 5 23:51:10.489148 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 5 23:51:10.489165 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 5 23:51:10.489299 kernel: hv_vmbus: registering driver hv_storvsc Sep 5 23:51:10.489310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 23:51:10.448148 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:51:10.519836 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 5 23:51:10.519858 kernel: scsi host1: storvsc_host_t Sep 5 23:51:10.519911 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 5 23:51:10.538721 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 5 23:51:10.538763 kernel: scsi host0: storvsc_host_t Sep 5 23:51:10.526206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:51:10.526440 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:51:10.553579 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:51:10.580426 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 5 23:51:10.582298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:51:10.606429 kernel: PTP clock support registered Sep 5 23:51:10.611606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:51:10.481333 kernel: hv_utils: Registering HyperV Utility Driver Sep 5 23:51:10.496229 kernel: hv_vmbus: registering driver hv_utils Sep 5 23:51:10.496247 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: VF slot 1 added Sep 5 23:51:10.506500 kernel: hv_utils: Heartbeat IC version 3.0 Sep 5 23:51:10.506514 kernel: hv_utils: Shutdown IC version 3.2 Sep 5 23:51:10.506522 kernel: hv_utils: TimeSync IC version 4.0 Sep 5 23:51:10.506531 systemd-journald[217]: Time jumped backwards, rotating. Sep 5 23:51:10.506575 kernel: hv_vmbus: registering driver hv_pci Sep 5 23:51:10.506583 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Sep 5 23:51:10.506700 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 23:51:10.645637 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 23:51:10.536883 kernel: hv_pci d56a67bc-fbce-4ade-8ac1-e51be9ed34af: PCI VMBus probing: Using version 0x10004 Sep 5 23:51:10.537040 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Sep 5 23:51:10.537140 kernel: hv_pci d56a67bc-fbce-4ade-8ac1-e51be9ed34af: PCI host bridge to bus fbce:00 Sep 5 23:51:10.537221 kernel: pci_bus fbce:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 5 23:51:10.478957 systemd-resolved[258]: Clock change detected. Flushing caches. Sep 5 23:51:10.537771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 5 23:51:10.565563 kernel: pci_bus fbce:00: No busn resource found for root bus, will use [bus 00-ff] Sep 5 23:51:10.741591 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 5 23:51:10.741940 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Sep 5 23:51:10.745632 kernel: sd 1:0:0:0: [sda] Write Protect is off Sep 5 23:51:10.745787 kernel: pci fbce:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 5 23:51:10.749956 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 5 23:51:10.756423 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 5 23:51:10.756561 kernel: pci fbce:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 5 23:51:10.775148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:51:10.775196 kernel: pci fbce:00:02.0: enabling Extended Tags Sep 5 23:51:10.775225 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Sep 5 23:51:10.796459 kernel: pci fbce:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fbce:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 5 23:51:10.796631 kernel: pci_bus fbce:00: busn_res: [bus 00-ff] end is updated to 00 Sep 5 23:51:10.805545 kernel: pci fbce:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 5 23:51:10.846775 kernel: mlx5_core fbce:00:02.0: enabling device (0000 -> 0002) Sep 5 23:51:10.852874 kernel: mlx5_core fbce:00:02.0: firmware version: 16.31.2424 Sep 5 23:51:11.133806 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: VF registering: eth1 Sep 5 23:51:11.134011 kernel: mlx5_core fbce:00:02.0 eth1: joined to eth0 Sep 5 23:51:11.143024 kernel: mlx5_core fbce:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 5 23:51:11.153878 kernel: mlx5_core fbce:00:02.0 enP64462s1: renamed from eth1 Sep 5 23:51:11.362926 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (488) Sep 5 23:51:11.377483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 5 23:51:11.409437 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (485) Sep 5 23:51:11.422451 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 5 23:51:11.429626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 5 23:51:11.453095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 23:51:11.468990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 5 23:51:11.490974 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 5 23:51:11.511118 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:51:12.519892 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 5 23:51:12.520662 disk-uuid[600]: The operation has completed successfully. Sep 5 23:51:12.575241 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 23:51:12.576885 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 23:51:12.609992 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 23:51:12.622457 sh[713]: Success Sep 5 23:51:12.654979 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 5 23:51:12.860270 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 5 23:51:12.869290 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 23:51:12.879717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 23:51:12.914900 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e Sep 5 23:51:12.914938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:51:12.921236 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 23:51:12.925990 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 23:51:12.929803 kernel: BTRFS info (device dm-0): using free space tree Sep 5 23:51:13.205546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 23:51:13.210814 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 23:51:13.229136 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 23:51:13.236125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 23:51:13.272473 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:51:13.272519 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:51:13.276645 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:51:13.318183 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:51:13.327158 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 23:51:13.337869 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:51:13.345697 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 23:51:13.358117 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 23:51:13.391093 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:51:13.406987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:51:13.437251 systemd-networkd[897]: lo: Link UP Sep 5 23:51:13.440375 systemd-networkd[897]: lo: Gained carrier Sep 5 23:51:13.442057 systemd-networkd[897]: Enumeration completed Sep 5 23:51:13.442663 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:51:13.442666 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:51:13.444442 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:51:13.451370 systemd[1]: Reached target network.target - Network. Sep 5 23:51:13.513882 kernel: mlx5_core fbce:00:02.0 enP64462s1: Link up Sep 5 23:51:13.594812 systemd-networkd[897]: enP64462s1: Link UP Sep 5 23:51:13.598415 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: Data path switched to VF: enP64462s1 Sep 5 23:51:13.595066 systemd-networkd[897]: eth0: Link UP Sep 5 23:51:13.595431 systemd-networkd[897]: eth0: Gained carrier Sep 5 23:51:13.595441 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 5 23:51:13.607072 systemd-networkd[897]: enP64462s1: Gained carrier
Sep 5 23:51:13.630929 systemd-networkd[897]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 5 23:51:14.410352 ignition[866]: Ignition 2.19.0
Sep 5 23:51:14.410363 ignition[866]: Stage: fetch-offline
Sep 5 23:51:14.410421 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.410430 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.410525 ignition[866]: parsed url from cmdline: ""
Sep 5 23:51:14.410528 ignition[866]: no config URL provided
Sep 5 23:51:14.410532 ignition[866]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.410539 ignition[866]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.410544 ignition[866]: failed to fetch config: resource requires networking
Sep 5 23:51:14.410736 ignition[866]: Ignition finished successfully
Sep 5 23:51:14.412391 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:51:14.430989 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:51:14.446810 ignition[907]: Ignition 2.19.0
Sep 5 23:51:14.446817 ignition[907]: Stage: fetch
Sep 5 23:51:14.447049 ignition[907]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.447059 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.447181 ignition[907]: parsed url from cmdline: ""
Sep 5 23:51:14.447184 ignition[907]: no config URL provided
Sep 5 23:51:14.447192 ignition[907]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.447199 ignition[907]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:51:14.447220 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 5 23:51:14.590413 ignition[907]: GET result: OK
Sep 5 23:51:14.593243 ignition[907]: config has been read from IMDS userdata
Sep 5 23:51:14.593285 ignition[907]: parsing config with SHA512: ad1198a4e79b634a88d4b1def7d1b3f7ef8bc81e369fb2ad7d733ac64476879c452293d37fd887fe631d9c56163399f60172bb823a317c87e51d19536b92bc8f
Sep 5 23:51:14.597017 unknown[907]: fetched base config from "system"
Sep 5 23:51:14.597024 unknown[907]: fetched base config from "system"
Sep 5 23:51:14.597029 unknown[907]: fetched user config from "azure"
Sep 5 23:51:14.597375 ignition[907]: fetch: fetch complete
Sep 5 23:51:14.597379 ignition[907]: fetch: fetch passed
Sep 5 23:51:14.597418 ignition[907]: Ignition finished successfully
Sep 5 23:51:14.600844 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:51:14.617116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:51:14.642319 ignition[914]: Ignition 2.19.0
Sep 5 23:51:14.642325 ignition[914]: Stage: kargs
Sep 5 23:51:14.642602 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.642612 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.645356 ignition[914]: kargs: kargs passed
Sep 5 23:51:14.645412 ignition[914]: Ignition finished successfully
Sep 5 23:51:14.649055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:51:14.666117 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:51:14.690018 ignition[920]: Ignition 2.19.0
Sep 5 23:51:14.690034 ignition[920]: Stage: disks
Sep 5 23:51:14.690266 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:14.690276 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:14.692112 ignition[920]: disks: disks passed
Sep 5 23:51:14.692167 ignition[920]: Ignition finished successfully
Sep 5 23:51:14.698287 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:51:14.704075 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:51:14.714756 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:51:14.725515 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:51:14.736013 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 23:51:14.746960 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:51:14.779030 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:51:14.842014 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 5 23:51:14.851819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:51:14.869103 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:51:14.921876 kernel: EXT4-fs (sda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:51:14.922559 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:51:14.927254 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:51:14.986939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:51:14.993994 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:51:15.005357 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 5 23:51:15.027203 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:51:15.027234 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:51:15.033535 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Sep 5 23:51:15.033558 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:51:15.061372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:51:15.064949 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:51:15.064970 kernel: BTRFS info (device sda6): using free space tree
Sep 5 23:51:15.064980 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 5 23:51:15.076807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:51:15.092136 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
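The fetch stage above pulls the Ignition config from the Azure Instance Metadata Service (IMDS) and logs a SHA512 digest of it before parsing. A minimal sketch of that request, assuming the usual "Metadata: true" request header and base64-encoded userData; only the URL and the digest algorithm come from the log itself:

```python
# Hypothetical reconstruction of the fetch ignition[907] logs above.
import base64
import hashlib
import urllib.request

IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                 "?api-version=2021-01-01&format=text")

def fetch_userdata() -> bytes:
    # Assumption: IMDS requires the Metadata header on every request.
    req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        raw = resp.read()
    # Assumption: userData arrives base64-encoded and must be decoded.
    return base64.b64decode(raw)

if __name__ == "__main__":
    config = fetch_userdata()
    # Comparable to the "parsing config with SHA512: ..." line above.
    print("SHA512:", hashlib.sha512(config).hexdigest())
```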
Sep 5 23:51:15.549513 coreos-metadata[942]: Sep 05 23:51:15.549 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 5 23:51:15.557324 coreos-metadata[942]: Sep 05 23:51:15.557 INFO Fetch successful Sep 5 23:51:15.557324 coreos-metadata[942]: Sep 05 23:51:15.557 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 5 23:51:15.573473 coreos-metadata[942]: Sep 05 23:51:15.572 INFO Fetch successful Sep 5 23:51:15.580054 coreos-metadata[942]: Sep 05 23:51:15.574 INFO wrote hostname ci-4081.3.5-n-29d70f4830 to /sysroot/etc/hostname Sep 5 23:51:15.578943 systemd-networkd[897]: eth0: Gained IPv6LL Sep 5 23:51:15.579642 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 23:51:15.923717 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 23:51:15.947703 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Sep 5 23:51:15.971392 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 23:51:15.979618 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 23:51:16.931198 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 23:51:16.953173 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 23:51:16.967143 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 23:51:16.985881 kernel: BTRFS info (device sda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:51:16.981529 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 23:51:17.004783 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 23:51:17.022497 ignition[1061]: INFO : Ignition 2.19.0 Sep 5 23:51:17.027158 ignition[1061]: INFO : Stage: mount Sep 5 23:51:17.031233 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:51:17.031233 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:51:17.031233 ignition[1061]: INFO : mount: mount passed Sep 5 23:51:17.031233 ignition[1061]: INFO : Ignition finished successfully Sep 5 23:51:17.031909 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 23:51:17.057083 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 23:51:17.077845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 23:51:17.109668 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Sep 5 23:51:17.109717 kernel: BTRFS info (device sda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858 Sep 5 23:51:17.119593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 5 23:51:17.119622 kernel: BTRFS info (device sda6): using free space tree Sep 5 23:51:17.126882 kernel: BTRFS info (device sda6): auto enabling async discard Sep 5 23:51:17.127649 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
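The flatcar-metadata-hostname step above resolves the instance name from IMDS and writes it out as the hostname (the log shows it landing in /sysroot/etc/hostname). A rough sketch of the same two operations; the header, timeout, and target path handling here are illustrative assumptions:

```python
import urllib.request

IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
             "?api-version=2017-08-01&format=text")

def fetch_instance_name() -> str:
    # Assumption: same Metadata-header convention as the userData fetch.
    req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    name = fetch_instance_name()  # "ci-4081.3.5-n-29d70f4830" in this boot
    # The initrd writes the file under /sysroot; plain /etc/hostname here.
    with open("/etc/hostname", "w") as f:
        f.write(name + "\n")
```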
Sep 5 23:51:17.151890 ignition[1090]: INFO : Ignition 2.19.0 Sep 5 23:51:17.151890 ignition[1090]: INFO : Stage: files Sep 5 23:51:17.151890 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:51:17.151890 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 5 23:51:17.171746 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Sep 5 23:51:17.171746 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 23:51:17.171746 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 23:51:17.238023 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 23:51:17.245052 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 23:51:17.245052 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 23:51:17.239390 unknown[1090]: wrote ssh authorized keys file for user: core Sep 5 23:51:17.275891 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 5 23:51:17.286655 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 5 23:51:17.317854 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 23:51:17.411210 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:51:17.421371 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 5 23:51:17.974408 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 23:51:19.179702 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 5 23:51:19.179702 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 23:51:19.215822 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 23:51:19.227170 ignition[1090]: INFO : files: files passed Sep 5 23:51:19.227170 ignition[1090]: INFO : Ignition finished successfully Sep 5 23:51:19.232743 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:51:19.269150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 23:51:19.286026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:51:19.344389 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:51:19.344389 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:51:19.307214 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:51:19.366933 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:51:19.307303 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:51:19.316112 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:51:19.326835 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 23:51:19.361134 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:51:19.397966 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:51:19.398095 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:51:19.410751 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 5 23:51:19.422511 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 23:51:19.432399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 23:51:19.451017 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 23:51:19.476915 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:51:19.496113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 23:51:19.513043 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:51:19.525001 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:51:19.531215 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 23:51:19.541588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 23:51:19.541707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 23:51:19.556375 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 23:51:19.561740 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 23:51:19.572817 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 23:51:19.583446 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:51:19.594185 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 23:51:19.606329 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 23:51:19.617435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:51:19.629778 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 23:51:19.640175 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 23:51:19.652840 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 23:51:19.662377 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 23:51:19.662496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:51:19.677071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:51:19.683134 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:51:19.694339 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 23:51:19.697882 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:51:19.707400 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 23:51:19.707521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:51:19.725601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 23:51:19.725720 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 23:51:19.732292 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 23:51:19.732381 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 23:51:19.742962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 5 23:51:19.743055 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 5 23:51:19.778109 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 23:51:19.806890 ignition[1141]: INFO : Ignition 2.19.0
Sep 5 23:51:19.806890 ignition[1141]: INFO : Stage: umount
Sep 5 23:51:19.806890 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:51:19.806890 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 5 23:51:19.808115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 23:51:19.818334 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 23:51:19.818503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:51:19.836089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 23:51:19.836203 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:51:19.854686 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 23:51:19.855294 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 23:51:19.855408 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 23:51:19.861642 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 23:51:19.861875 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 23:51:19.867978 ignition[1141]: INFO : umount: umount passed
Sep 5 23:51:19.867978 ignition[1141]: INFO : Ignition finished successfully
Sep 5 23:51:19.873292 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 23:51:19.873361 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 23:51:19.882399 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 5 23:51:19.882441 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 5 23:51:19.892160 systemd[1]: Stopped target network.target - Network.
Sep 5 23:51:19.903171 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 23:51:19.903227 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:51:19.914030 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 23:51:19.923485 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 23:51:19.934920 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:51:19.948549 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 23:51:19.958721 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 23:51:19.968070 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 23:51:19.968141 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:51:19.978103 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 23:51:19.978154 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:51:19.988446 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 23:51:19.988504 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 23:51:19.998661 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 23:51:19.998707 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 23:51:20.009126 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 23:51:20.019111 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 23:51:20.029491 systemd-networkd[897]: eth0: DHCPv6 lease lost
Sep 5 23:51:20.030886 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 23:51:20.031005 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 23:51:20.046100 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 23:51:20.047444 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:51:20.057896 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:51:20.059883 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:51:20.073238 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:51:20.073301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:51:20.111077 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:51:20.120012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:51:20.120116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:51:20.307847 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: Data path switched from VF: enP64462s1 Sep 5 23:51:20.130971 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:51:20.131028 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:20.140491 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:51:20.140537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:51:20.153510 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:51:20.153553 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:51:20.165177 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:51:20.205802 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:51:20.205992 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:51:20.217500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 23:51:20.217539 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:51:20.223447 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:51:20.223474 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:51:20.234237 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:51:20.234286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:51:20.250914 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:51:20.250967 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:51:20.265986 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:51:20.266035 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:51:20.308048 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:51:20.321585 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:51:20.321666 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:51:20.334100 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 23:51:20.334157 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:51:20.345968 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:51:20.346015 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:51:20.357927 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 5 23:51:20.357975 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:51:20.370014 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:51:20.370111 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:51:20.390148 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:51:20.390281 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:51:20.729170 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:51:20.729322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:51:20.739081 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:51:20.749115 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:51:20.749172 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:51:20.775086 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:51:20.819398 systemd[1]: Switching root. Sep 5 23:51:20.881376 systemd-journald[217]: Journal stopped Sep 5 23:51:25.777708 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 5 23:51:25.777731 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:51:25.777742 kernel: SELinux: policy capability open_perms=1 Sep 5 23:51:25.777752 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:51:25.777760 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:51:25.777768 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:51:25.777777 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:51:25.777785 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:51:25.777793 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:51:25.777801 kernel: audit: type=1403 audit(1757116282.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:51:25.777812 systemd[1]: Successfully loaded SELinux policy in 183.088ms. Sep 5 23:51:25.777822 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.931ms. Sep 5 23:51:25.777834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:51:25.777843 systemd[1]: Detected virtualization microsoft. Sep 5 23:51:25.777853 systemd[1]: Detected architecture arm64. Sep 5 23:51:25.777882 systemd[1]: Detected first boot. Sep 5 23:51:25.777892 systemd[1]: Hostname set to . Sep 5 23:51:25.777902 systemd[1]: Initializing machine ID from random generator. Sep 5 23:51:25.777911 zram_generator::config[1183]: No configuration found. Sep 5 23:51:25.777922 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:51:25.777931 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 23:51:25.777942 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 23:51:25.777952 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 23:51:25.777962 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:51:25.777972 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:51:25.777982 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Sep 5 23:51:25.777992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:51:25.778001 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:51:25.778013 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:51:25.778023 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:51:25.778033 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:51:25.778042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:51:25.778052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:51:25.778063 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:51:25.778073 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:51:25.778083 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:51:25.778093 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:51:25.778104 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 23:51:25.778113 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:51:25.778123 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 23:51:25.778135 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 23:51:25.778145 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 23:51:25.778155 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:51:25.778165 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:51:25.778176 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:51:25.778186 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:51:25.778196 systemd[1]: Reached target swap.target - Swaps. Sep 5 23:51:25.778206 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:51:25.778216 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:51:25.778225 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:51:25.778235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:51:25.778247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:51:25.778257 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:51:25.778267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:51:25.778278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:51:25.778288 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:51:25.778298 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:51:25.778310 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:51:25.778320 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:51:25.778330 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Sep 5 23:51:25.778341 systemd[1]: Reached target machines.target - Containers. Sep 5 23:51:25.778351 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:51:25.778361 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:51:25.778371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:51:25.778381 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:51:25.778392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:51:25.778402 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:51:25.778412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:51:25.778422 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:51:25.778432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:51:25.778443 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:51:25.778453 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 23:51:25.778463 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 23:51:25.778472 kernel: fuse: init (API version 7.39) Sep 5 23:51:25.778483 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 23:51:25.778494 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 23:51:25.778504 kernel: loop: module loaded Sep 5 23:51:25.778513 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:51:25.778523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:51:25.778533 kernel: ACPI: bus type drm_connector registered Sep 5 23:51:25.778542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:51:25.778567 systemd-journald[1269]: Collecting audit messages is disabled. Sep 5 23:51:25.778590 systemd-journald[1269]: Journal started Sep 5 23:51:25.778611 systemd-journald[1269]: Runtime Journal (/run/log/journal/573700d813ee40ad8f6ff31c075cf63f) is 8.0M, max 78.5M, 70.5M free. Sep 5 23:51:24.776207 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:51:24.974526 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 5 23:51:24.974865 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 23:51:24.975163 systemd[1]: systemd-journald.service: Consumed 2.989s CPU time. Sep 5 23:51:25.800563 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:51:25.816890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:51:25.827852 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 23:51:25.828192 systemd[1]: Stopped verity-setup.service. Sep 5 23:51:25.843888 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:51:25.844666 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:51:25.850518 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:51:25.857246 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:51:25.862507 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Sep 5 23:51:25.868555 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:51:25.874450 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:51:25.880920 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:51:25.888105 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:51:25.895354 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:51:25.895485 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:51:25.903466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:51:25.903595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:51:25.911532 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:51:25.911670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:51:25.918522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:51:25.918643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:51:25.925778 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:51:25.925913 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:51:25.932287 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:51:25.932407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:51:25.938084 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:51:25.944069 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:51:25.951496 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:51:25.958376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:51:25.975077 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:51:25.987964 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:51:25.995059 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:51:26.001129 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:51:26.001167 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:51:26.007585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:51:26.015202 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:51:26.024043 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:51:26.031877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:51:26.051001 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 23:51:26.058262 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:51:26.064748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:51:26.065775 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 5 23:51:26.072142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:51:26.073284 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:51:26.082043 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:51:26.090991 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:51:26.098969 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:51:26.111670 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:51:26.118535 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:51:26.126373 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:51:26.135611 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:51:26.150349 kernel: loop0: detected capacity change from 0 to 211168 Sep 5 23:51:26.150753 systemd-journald[1269]: Time spent on flushing to /var/log/journal/573700d813ee40ad8f6ff31c075cf63f is 74.404ms for 898 entries. Sep 5 23:51:26.150753 systemd-journald[1269]: System Journal (/var/log/journal/573700d813ee40ad8f6ff31c075cf63f) is 11.8M, max 2.6G, 2.6G free. Sep 5 23:51:26.312195 systemd-journald[1269]: Received client request to flush runtime journal. Sep 5 23:51:26.312251 systemd-journald[1269]: /var/log/journal/573700d813ee40ad8f6ff31c075cf63f/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 5 23:51:26.312276 systemd-journald[1269]: Rotating system journal. Sep 5 23:51:26.312301 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:51:26.156640 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:51:26.174130 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:51:26.192945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:51:26.206653 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 23:51:26.233778 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Sep 5 23:51:26.233789 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Sep 5 23:51:26.239105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:51:26.260111 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:51:26.314777 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:51:26.322540 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:51:26.323330 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 23:51:26.344877 kernel: loop1: detected capacity change from 0 to 31320 Sep 5 23:51:26.417402 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:51:26.428017 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:51:26.446348 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Sep 5 23:51:26.446371 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. 
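As a quick cross-check of the journald numbers above, 74.404 ms spent flushing 898 entries works out to roughly 83 microseconds per entry; a one-liner using only the figures from the log:

```python
# Figures taken verbatim from the systemd-journald lines above.
flush_ms, entries = 74.404, 898
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~82.9 us
```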
Sep 5 23:51:26.449845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:51:26.828887 kernel: loop2: detected capacity change from 0 to 114432 Sep 5 23:51:27.262901 kernel: loop3: detected capacity change from 0 to 114328 Sep 5 23:51:27.515095 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:51:27.530020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:51:27.555302 systemd-udevd[1346]: Using default interface naming scheme 'v255'. Sep 5 23:51:27.573885 kernel: loop4: detected capacity change from 0 to 211168 Sep 5 23:51:27.584875 kernel: loop5: detected capacity change from 0 to 31320 Sep 5 23:51:27.593869 kernel: loop6: detected capacity change from 0 to 114432 Sep 5 23:51:27.602875 kernel: loop7: detected capacity change from 0 to 114328 Sep 5 23:51:27.606252 (sd-merge)[1348]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 5 23:51:27.606671 (sd-merge)[1348]: Merged extensions into '/usr'. Sep 5 23:51:27.610189 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:51:27.610202 systemd[1]: Reloading... Sep 5 23:51:27.675920 zram_generator::config[1374]: No configuration found. Sep 5 23:51:27.896949 kernel: hv_vmbus: registering driver hv_balloon Sep 5 23:51:27.897046 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 23:51:27.911148 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 5 23:51:27.911257 kernel: hv_vmbus: registering driver hyperv_fb Sep 5 23:51:27.916031 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 5 23:51:27.925874 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 5 23:51:27.925952 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 5 23:51:27.929667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:27.938039 kernel: Console: switching to colour dummy device 80x25 Sep 5 23:51:27.947394 kernel: Console: switching to colour frame buffer device 128x48 Sep 5 23:51:27.996897 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1399) Sep 5 23:51:28.029291 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 23:51:28.029716 systemd[1]: Reloading finished in 419 ms. Sep 5 23:51:28.061794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:51:28.071524 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:51:28.108238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 5 23:51:28.123475 systemd[1]: Starting ensure-sysext.service... Sep 5 23:51:28.131264 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:51:28.140523 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:51:28.150151 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:51:28.159139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 5 23:51:28.174180 systemd[1]: Reloading requested from client PID 1502 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:51:28.174197 systemd[1]: Reloading... Sep 5 23:51:28.197931 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:51:28.198198 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 23:51:28.198851 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:51:28.199589 systemd-tmpfiles[1506]: ACLs are not supported, ignoring. Sep 5 23:51:28.199710 systemd-tmpfiles[1506]: ACLs are not supported, ignoring. Sep 5 23:51:28.233274 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:51:28.233286 systemd-tmpfiles[1506]: Skipping /boot Sep 5 23:51:28.241277 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:51:28.241422 systemd-tmpfiles[1506]: Skipping /boot Sep 5 23:51:28.256916 zram_generator::config[1544]: No configuration found. Sep 5 23:51:28.361977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:51:28.436490 systemd[1]: Reloading finished in 262 ms. Sep 5 23:51:28.453288 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:51:28.467308 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:51:28.476513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:51:28.497085 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:51:28.505437 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:51:28.515367 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:51:28.532478 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:51:28.542166 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:51:28.550920 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:51:28.566802 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:51:28.577389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:51:28.593091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:51:28.600662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:51:28.611971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:51:28.619426 lvm[1604]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:51:28.630093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:51:28.636929 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:51:28.638802 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 5 23:51:28.657439 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:51:28.672085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:51:28.672975 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:51:28.682895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:51:28.683048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:51:28.693911 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:51:28.695088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:51:28.708034 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:51:28.721729 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:51:28.730150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:51:28.737425 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:51:28.746901 lvm[1637]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:51:28.748154 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:51:28.764447 augenrules[1639]: No rules Sep 5 23:51:28.773271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:51:28.782140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:51:28.793554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:51:28.800736 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:51:28.808849 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:51:28.817263 systemd-resolved[1611]: Positive Trust Anchors: Sep 5 23:51:28.818443 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:51:28.819454 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:51:28.819543 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:51:28.828430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:51:28.828582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:51:28.830556 systemd-networkd[1504]: lo: Link UP Sep 5 23:51:28.830868 systemd-networkd[1504]: lo: Gained carrier Sep 5 23:51:28.832943 systemd-networkd[1504]: Enumeration completed Sep 5 23:51:28.833388 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:51:28.833488 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 5 23:51:28.837132 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:51:28.839127 systemd-resolved[1611]: Using system hostname 'ci-4081.3.5-n-29d70f4830'. Sep 5 23:51:28.844530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:51:28.844704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:51:28.854661 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:51:28.854805 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:51:28.870426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:51:28.875102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:51:28.884163 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:51:28.896140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:51:28.903873 kernel: mlx5_core fbce:00:02.0 enP64462s1: Link up Sep 5 23:51:28.909926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:51:28.916892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:51:28.923133 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:51:28.931092 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 23:51:28.938986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:51:28.939288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:51:28.954899 kernel: hv_netvsc 00224879-8185-0022-4879-818500224879 eth0: Data path switched to VF: enP64462s1 Sep 5 23:51:28.955268 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:51:28.956875 systemd-networkd[1504]: enP64462s1: Link UP Sep 5 23:51:28.956981 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:51:28.959256 systemd-networkd[1504]: eth0: Link UP Sep 5 23:51:28.959278 systemd-networkd[1504]: eth0: Gained carrier Sep 5 23:51:28.959303 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:51:28.964670 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:51:28.972137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:51:28.972370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:51:28.980967 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:51:28.981241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:51:28.982574 systemd-networkd[1504]: enP64462s1: Gained carrier Sep 5 23:51:28.991900 systemd[1]: Finished ensure-sysext.service. Sep 5 23:51:28.996951 systemd-networkd[1504]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 5 23:51:29.000587 systemd[1]: Reached target network.target - Network. Sep 5 23:51:29.007503 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:51:29.015138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
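A small sanity check on the DHCPv4 lease recorded above: 10.200.20.38/24 implies the 10.200.20.0/24 network, which contains the advertised gateway 10.200.20.1. The stdlib ipaddress module confirms this; the snippet is purely illustrative:

```python
import ipaddress

# Address, prefix, and gateway as logged by systemd-networkd above.
iface = ipaddress.ip_interface("10.200.20.38/24")
gateway = ipaddress.ip_address("10.200.20.1")
print(iface.network)             # 10.200.20.0/24
print(gateway in iface.network)  # True: gateway is on-link
```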
Sep 5 23:51:29.015345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:51:29.361952 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:51:29.369896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:51:30.743046 systemd-networkd[1504]: eth0: Gained IPv6LL Sep 5 23:51:30.748503 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:51:30.756645 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:51:32.503027 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:51:32.516450 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:51:32.527078 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:51:32.540417 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:51:32.546461 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:51:32.551991 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:51:32.558800 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:51:32.565603 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:51:32.571541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:51:32.578478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:51:32.585067 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:51:32.585099 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:51:32.590087 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:51:32.595513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:51:32.603165 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:51:32.612494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:51:32.618453 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:51:32.624371 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:51:32.629309 systemd[1]: Reached target basic.target - Basic System. Sep 5 23:51:32.634244 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:51:32.634275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:51:32.646947 systemd[1]: Starting chronyd.service - NTP client/server... Sep 5 23:51:32.655988 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:51:32.671259 (chronyd)[1671]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 5 23:51:32.672043 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:51:32.680293 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:51:32.686487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 5 23:51:32.695916 jq[1677]: false
Sep 5 23:51:32.696326 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 23:51:32.704209 chronyd[1680]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Sep 5 23:51:32.705081 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 23:51:32.705210 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Sep 5 23:51:32.716038 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Sep 5 23:51:32.721080 KVP[1681]: KVP starting; pid is:1681
Sep 5 23:51:32.722781 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Sep 5 23:51:32.724687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:32.733008 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 23:51:32.740994 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 23:51:32.747003 chronyd[1680]: Timezone right/UTC failed leap second check, ignoring
Sep 5 23:51:32.747926 chronyd[1680]: Loaded seccomp filter (level 2)
Sep 5 23:51:32.750900 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found loop4
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found loop5
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found loop6
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found loop7
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda1
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda2
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda3
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found usr
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda4
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda6
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda7
Sep 5 23:51:32.756849 extend-filesystems[1678]: Found sda9
Sep 5 23:51:32.756849 extend-filesystems[1678]: Checking size of /dev/sda9
Sep 5 23:51:32.914881 kernel: hv_utils: KVP IC version 4.0
Sep 5 23:51:32.778784 KVP[1681]: KVP LIC Version: 3.1
Sep 5 23:51:32.915094 extend-filesystems[1678]: Old size kept for /dev/sda9
Sep 5 23:51:32.915094 extend-filesystems[1678]: Found sr0
Sep 5 23:51:32.761037 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 23:51:32.893351 dbus-daemon[1674]: [system] SELinux support is enabled
Sep 5 23:51:32.779047 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 23:51:32.815031 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 23:51:32.831789 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 23:51:32.839112 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 23:51:32.973725 update_engine[1705]: I20250905 23:51:32.959914 1705 main.cc:92] Flatcar Update Engine starting
Sep 5 23:51:32.973725 update_engine[1705]: I20250905 23:51:32.961802 1705 update_check_scheduler.cc:74] Next update check in 2m43s
Sep 5 23:51:32.846665 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 23:51:32.974117 jq[1713]: true
Sep 5 23:51:32.872804 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 23:51:32.884572 systemd[1]: Started chronyd.service - NTP client/server.
Sep 5 23:51:32.897873 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 23:51:32.909934 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 23:51:32.910938 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 23:51:32.911228 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 23:51:32.911382 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 23:51:32.943040 systemd-logind[1701]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Sep 5 23:51:32.943217 systemd-logind[1701]: New seat seat0.
Sep 5 23:51:32.944302 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 23:51:32.944481 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 23:51:32.966047 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 23:51:32.983689 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 23:51:32.992293 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 23:51:32.992901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 23:51:33.017168 (ntainerd)[1735]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 23:51:33.019602 jq[1734]: true
Sep 5 23:51:33.030368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 23:51:33.030404 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 23:51:33.037807 dbus-daemon[1674]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 5 23:51:33.038253 coreos-metadata[1673]: Sep 05 23:51:33.038 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 5 23:51:33.039327 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 23:51:33.039352 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 23:51:33.047503 coreos-metadata[1673]: Sep 05 23:51:33.047 INFO Fetch successful
Sep 5 23:51:33.047503 coreos-metadata[1673]: Sep 05 23:51:33.047 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Sep 5 23:51:33.050376 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 23:51:33.058381 coreos-metadata[1673]: Sep 05 23:51:33.058 INFO Fetch successful
Sep 5 23:51:33.058891 coreos-metadata[1673]: Sep 05 23:51:33.058 INFO Fetching http://168.63.129.16/machine/6fe45c74-cdbe-4c42-843b-0d65fc0c695a/841ba8b9%2De1d9%2D4c36%2Daff7%2Df570c785e662.%5Fci%2D4081.3.5%2Dn%2D29d70f4830?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Sep 5 23:51:33.063825 coreos-metadata[1673]: Sep 05 23:51:33.063 INFO Fetch successful
Sep 5 23:51:33.064083 coreos-metadata[1673]: Sep 05 23:51:33.063 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Sep 5 23:51:33.064829 tar[1733]: linux-arm64/LICENSE
Sep 5 23:51:33.065925 tar[1733]: linux-arm64/helm
Sep 5 23:51:33.067256 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 23:51:33.088256 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1720)
Sep 5 23:51:33.088338 coreos-metadata[1673]: Sep 05 23:51:33.080 INFO Fetch successful
Sep 5 23:51:33.157188 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 5 23:51:33.171496 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 23:51:33.322320 bash[1777]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 23:51:33.322851 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 23:51:33.335348 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 23:51:33.431108 locksmithd[1753]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 23:51:33.736670 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 23:51:33.788277 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 23:51:33.804140 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 23:51:33.814948 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Sep 5 23:51:33.826788 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 23:51:33.827256 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 23:51:33.839173 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 23:51:33.878227 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Sep 5 23:51:33.890603 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 23:51:33.906066 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 23:51:33.921429 tar[1733]: linux-arm64/README.md
Sep 5 23:51:33.924226 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 5 23:51:33.932520 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 23:51:33.955445 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 23:51:33.983332 containerd[1735]: time="2025-09-05T23:51:33.982936320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 5 23:51:33.993037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:51:34.004039 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:51:34.012609 containerd[1735]: time="2025-09-05T23:51:34.012555800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.013981 containerd[1735]: time="2025-09-05T23:51:34.013942040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:51:34.013981 containerd[1735]: time="2025-09-05T23:51:34.013977120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 23:51:34.014060 containerd[1735]: time="2025-09-05T23:51:34.013993680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014154160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014177480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014239680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014252720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014404040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014419200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014431640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014440960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014509 containerd[1735]: time="2025-09-05T23:51:34.014510920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014712 containerd[1735]: time="2025-09-05T23:51:34.014688000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014805 containerd[1735]: time="2025-09-05T23:51:34.014779480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 23:51:34.014805 containerd[1735]: time="2025-09-05T23:51:34.014803360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 23:51:34.014915 containerd[1735]: time="2025-09-05T23:51:34.014897560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 23:51:34.014968 containerd[1735]: time="2025-09-05T23:51:34.014950440Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.028698200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.028756320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.028778480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.028793280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.028807640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.028991000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 23:51:34.029214 containerd[1735]: time="2025-09-05T23:51:34.029217280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029310160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029326760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029339440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029352560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029365560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029377680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029390960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029406240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029433 containerd[1735]: time="2025-09-05T23:51:34.029427640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029441600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029453040Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029472760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029487400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029499360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029512080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029523600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029536800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029553040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029566840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029579880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029595080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029606400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.029625 containerd[1735]: time="2025-09-05T23:51:34.029618000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.029632040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.029647280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.029671280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.029683000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.029693280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030400960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030436560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030448480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030460560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030469560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030484040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030494240Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 23:51:34.030816 containerd[1735]: time="2025-09-05T23:51:34.030504120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 23:51:34.031077 containerd[1735]: time="2025-09-05T23:51:34.030767040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 5 23:51:34.031077 containerd[1735]: time="2025-09-05T23:51:34.030831200Z" level=info msg="Connect containerd service"
Sep 5 23:51:34.031077 containerd[1735]: time="2025-09-05T23:51:34.030878520Z" level=info msg="using legacy CRI server"
Sep 5 23:51:34.031077 containerd[1735]: time="2025-09-05T23:51:34.030887000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 23:51:34.031077 containerd[1735]: time="2025-09-05T23:51:34.030971400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 5 23:51:34.031599 containerd[1735]: time="2025-09-05T23:51:34.031570960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 23:51:34.031923 containerd[1735]: time="2025-09-05T23:51:34.031885040Z" level=info msg="Start subscribing containerd event"
Sep 5 23:51:34.031954 containerd[1735]: time="2025-09-05T23:51:34.031933880Z" level=info msg="Start recovering state"
Sep 5 23:51:34.032391 containerd[1735]: time="2025-09-05T23:51:34.031997280Z" level=info msg="Start event monitor"
Sep 5 23:51:34.032391 containerd[1735]: time="2025-09-05T23:51:34.032012240Z" level=info msg="Start snapshots syncer"
Sep 5 23:51:34.032391 containerd[1735]: time="2025-09-05T23:51:34.032037200Z" level=info msg="Start cni network conf syncer for default"
Sep 5 23:51:34.032391 containerd[1735]: time="2025-09-05T23:51:34.032045680Z" level=info msg="Start streaming server"
Sep 5 23:51:34.033849 containerd[1735]: time="2025-09-05T23:51:34.033813440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 23:51:34.036562 containerd[1735]: time="2025-09-05T23:51:34.034099840Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 23:51:34.036562 containerd[1735]: time="2025-09-05T23:51:34.034187200Z" level=info msg="containerd successfully booted in 0.051964s"
Sep 5 23:51:34.034271 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 23:51:34.040724 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 23:51:34.049933 systemd[1]: Startup finished in 646ms (kernel) + 13.626s (initrd) + 11.849s (userspace) = 26.122s.
Sep 5 23:51:34.420490 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:51:34.425648 login[1826]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:51:34.437700 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 23:51:34.445186 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 23:51:34.452950 systemd-logind[1701]: New session 1 of user core.
Sep 5 23:51:34.459044 systemd-logind[1701]: New session 2 of user core.
Sep 5 23:51:34.464518 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 23:51:34.472716 systemd[1]: Starting user@500.service - User Manager for UID 500...
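[Editor's note: the level=error entry above ("no network config found in /etc/cni/net.d") is expected on a fresh node: the CRI plugin scans /etc/cni/net.d for a CNI conflist, and on a Kubernetes node that file is normally installed later by the cluster's network add-on. For orientation only, a sketch of the shape of such a conflist; the network name and subnet here are made-up assumptions, not what this node will use:

import json

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",            # illustrative name
    "plugins": [{
        "type": "bridge",             # a stock CNI reference plugin
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {"type": "host-local",
                 "subnet": "10.244.0.0/24"},  # illustrative subnet
    }],
}
# printed rather than written: the real file is owned by the add-on
print(json.dumps(conflist, indent=2))
]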
Sep 5 23:51:34.476700 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 23:51:34.483889 kubelet[1837]: E0905 23:51:34.483378 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:51:34.490996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:51:34.491269 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:51:34.643474 systemd[1852]: Queued start job for default target default.target.
Sep 5 23:51:34.650023 systemd[1852]: Created slice app.slice - User Application Slice.
Sep 5 23:51:34.650053 systemd[1852]: Reached target paths.target - Paths.
Sep 5 23:51:34.650066 systemd[1852]: Reached target timers.target - Timers.
Sep 5 23:51:34.651216 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 23:51:34.661827 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 23:51:34.662126 systemd[1852]: Reached target sockets.target - Sockets.
Sep 5 23:51:34.662144 systemd[1852]: Reached target basic.target - Basic System.
Sep 5 23:51:34.662190 systemd[1852]: Reached target default.target - Main User Target.
Sep 5 23:51:34.662218 systemd[1852]: Startup finished in 180ms.
Sep 5 23:51:34.662254 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 23:51:34.669041 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 23:51:34.669727 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 23:51:35.779788 waagent[1823]: 2025-09-05T23:51:35.779686Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Sep 5 23:51:35.785263 waagent[1823]: 2025-09-05T23:51:35.785204Z INFO Daemon Daemon OS: flatcar 4081.3.5
Sep 5 23:51:35.789537 waagent[1823]: 2025-09-05T23:51:35.789491Z INFO Daemon Daemon Python: 3.11.9
Sep 5 23:51:35.794661 waagent[1823]: 2025-09-05T23:51:35.794603Z INFO Daemon Daemon Run daemon
Sep 5 23:51:35.799136 waagent[1823]: 2025-09-05T23:51:35.799089Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5'
Sep 5 23:51:35.808098 waagent[1823]: 2025-09-05T23:51:35.807897Z INFO Daemon Daemon Using waagent for provisioning
Sep 5 23:51:35.812952 waagent[1823]: 2025-09-05T23:51:35.812905Z INFO Daemon Daemon Activate resource disk
Sep 5 23:51:35.817435 waagent[1823]: 2025-09-05T23:51:35.817389Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Sep 5 23:51:35.828128 waagent[1823]: 2025-09-05T23:51:35.828078Z INFO Daemon Daemon Found device: None
Sep 5 23:51:35.832489 waagent[1823]: 2025-09-05T23:51:35.832445Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Sep 5 23:51:35.840471 waagent[1823]: 2025-09-05T23:51:35.840428Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Sep 5 23:51:35.852734 waagent[1823]: 2025-09-05T23:51:35.852677Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 5 23:51:35.858316 waagent[1823]: 2025-09-05T23:51:35.858271Z INFO Daemon Daemon Running default provisioning handler
Sep 5 23:51:35.870150 waagent[1823]: 2025-09-05T23:51:35.870089Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Sep 5 23:51:35.883322 waagent[1823]: 2025-09-05T23:51:35.883262Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Sep 5 23:51:35.892390 waagent[1823]: 2025-09-05T23:51:35.892345Z INFO Daemon Daemon cloud-init is enabled: False
Sep 5 23:51:35.897029 waagent[1823]: 2025-09-05T23:51:35.896988Z INFO Daemon Daemon Copying ovf-env.xml
Sep 5 23:51:36.032809 waagent[1823]: 2025-09-05T23:51:36.032661Z INFO Daemon Daemon Successfully mounted dvd
Sep 5 23:51:36.068156 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Sep 5 23:51:36.076710 waagent[1823]: 2025-09-05T23:51:36.071837Z INFO Daemon Daemon Detect protocol endpoint
Sep 5 23:51:36.077122 waagent[1823]: 2025-09-05T23:51:36.077071Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Sep 5 23:51:36.082987 waagent[1823]: 2025-09-05T23:51:36.082942Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Sep 5 23:51:36.089313 waagent[1823]: 2025-09-05T23:51:36.089269Z INFO Daemon Daemon Test for route to 168.63.129.16
Sep 5 23:51:36.095018 waagent[1823]: 2025-09-05T23:51:36.094972Z INFO Daemon Daemon Route to 168.63.129.16 exists
Sep 5 23:51:36.100251 waagent[1823]: 2025-09-05T23:51:36.100206Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Sep 5 23:51:36.135128 waagent[1823]: 2025-09-05T23:51:36.135077Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Sep 5 23:51:36.142698 waagent[1823]: 2025-09-05T23:51:36.142669Z INFO Daemon Daemon Wire protocol version:2012-11-30
Sep 5 23:51:36.148579 waagent[1823]: 2025-09-05T23:51:36.148534Z INFO Daemon Daemon Server preferred version:2015-04-05
Sep 5 23:51:36.572892 waagent[1823]: 2025-09-05T23:51:36.572759Z INFO Daemon Daemon Initializing goal state during protocol detection
Sep 5 23:51:36.579219 waagent[1823]: 2025-09-05T23:51:36.579154Z INFO Daemon Daemon Forcing an update of the goal state.
Sep 5 23:51:36.588566 waagent[1823]: 2025-09-05T23:51:36.588514Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 5 23:51:36.633409 waagent[1823]: 2025-09-05T23:51:36.633358Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Sep 5 23:51:36.639833 waagent[1823]: 2025-09-05T23:51:36.639782Z INFO Daemon
Sep 5 23:51:36.642546 waagent[1823]: 2025-09-05T23:51:36.642497Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5b120609-c627-47a3-9bd3-75283340ce77 eTag: 9038949418948222766 source: Fabric]
Sep 5 23:51:36.653813 waagent[1823]: 2025-09-05T23:51:36.653765Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
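[Editor's note: the protocol negotiation above is waagent speaking the Azure WireServer protocol to 168.63.129.16. A minimal sketch of the goal-state fetch it logs; WireServer requests carry an x-ms-version header (2012-11-30 is the version the agent reports using above), and this endpoint only answers from inside an Azure VM:

import urllib.request

req = urllib.request.Request(
    "http://168.63.129.16/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
# returns the GoalState XML (incarnation, container id, role config, ...)
print(urllib.request.urlopen(req, timeout=10).read().decode())
]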
Sep 5 23:51:36.661020 waagent[1823]: 2025-09-05T23:51:36.660970Z INFO Daemon
Sep 5 23:51:36.663792 waagent[1823]: 2025-09-05T23:51:36.663746Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Sep 5 23:51:36.674743 waagent[1823]: 2025-09-05T23:51:36.674702Z INFO Daemon Daemon Downloading artifacts profile blob
Sep 5 23:51:36.752618 waagent[1823]: 2025-09-05T23:51:36.752534Z INFO Daemon Downloaded certificate {'thumbprint': 'F35DFF2E5D82E8D00EAE918D15B1D3C2999AA1F6', 'hasPrivateKey': True}
Sep 5 23:51:36.762405 waagent[1823]: 2025-09-05T23:51:36.762348Z INFO Daemon Fetch goal state completed
Sep 5 23:51:36.772974 waagent[1823]: 2025-09-05T23:51:36.772911Z INFO Daemon Daemon Starting provisioning
Sep 5 23:51:36.777698 waagent[1823]: 2025-09-05T23:51:36.777650Z INFO Daemon Daemon Handle ovf-env.xml.
Sep 5 23:51:36.782250 waagent[1823]: 2025-09-05T23:51:36.782207Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-29d70f4830]
Sep 5 23:51:36.804071 waagent[1823]: 2025-09-05T23:51:36.804003Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-29d70f4830]
Sep 5 23:51:36.810134 waagent[1823]: 2025-09-05T23:51:36.810078Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Sep 5 23:51:36.816055 waagent[1823]: 2025-09-05T23:51:36.816006Z INFO Daemon Daemon Primary interface is [eth0]
Sep 5 23:51:36.859475 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:51:36.859483 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:51:36.859511 systemd-networkd[1504]: eth0: DHCP lease lost
Sep 5 23:51:36.860969 waagent[1823]: 2025-09-05T23:51:36.860538Z INFO Daemon Daemon Create user account if not exists
Sep 5 23:51:36.866250 waagent[1823]: 2025-09-05T23:51:36.866196Z INFO Daemon Daemon User core already exists, skip useradd
Sep 5 23:51:36.871534 waagent[1823]: 2025-09-05T23:51:36.871489Z INFO Daemon Daemon Configure sudoer
Sep 5 23:51:36.876154 waagent[1823]: 2025-09-05T23:51:36.876100Z INFO Daemon Daemon Configure sshd
Sep 5 23:51:36.880451 waagent[1823]: 2025-09-05T23:51:36.880396Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Sep 5 23:51:36.880534 systemd-networkd[1504]: eth0: DHCPv6 lease lost
Sep 5 23:51:36.893058 waagent[1823]: 2025-09-05T23:51:36.893003Z INFO Daemon Daemon Deploy ssh public key.
Sep 5 23:51:36.915898 systemd-networkd[1504]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 5 23:51:38.018196 waagent[1823]: 2025-09-05T23:51:38.018122Z INFO Daemon Daemon Provisioning complete
Sep 5 23:51:38.037080 waagent[1823]: 2025-09-05T23:51:38.037030Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Sep 5 23:51:38.043301 waagent[1823]: 2025-09-05T23:51:38.043247Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Sep 5 23:51:38.053767 waagent[1823]: 2025-09-05T23:51:38.053700Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Sep 5 23:51:38.189611 waagent[1904]: 2025-09-05T23:51:38.189523Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Sep 5 23:51:38.189954 waagent[1904]: 2025-09-05T23:51:38.189673Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5
Sep 5 23:51:38.189954 waagent[1904]: 2025-09-05T23:51:38.189727Z INFO ExtHandler ExtHandler Python: 3.11.9
Sep 5 23:51:38.964761 waagent[1904]: 2025-09-05T23:51:38.964660Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Sep 5 23:51:38.964974 waagent[1904]: 2025-09-05T23:51:38.964932Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 5 23:51:38.965047 waagent[1904]: 2025-09-05T23:51:38.965012Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 5 23:51:38.973435 waagent[1904]: 2025-09-05T23:51:38.973371Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Sep 5 23:51:38.979082 waagent[1904]: 2025-09-05T23:51:38.979040Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Sep 5 23:51:38.979546 waagent[1904]: 2025-09-05T23:51:38.979501Z INFO ExtHandler
Sep 5 23:51:38.979619 waagent[1904]: 2025-09-05T23:51:38.979587Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c7e7166d-179c-4fe2-9ebd-8f28e3117bf2 eTag: 9038949418948222766 source: Fabric]
Sep 5 23:51:38.979932 waagent[1904]: 2025-09-05T23:51:38.979887Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Sep 5 23:51:38.986001 waagent[1904]: 2025-09-05T23:51:38.985922Z INFO ExtHandler
Sep 5 23:51:38.986085 waagent[1904]: 2025-09-05T23:51:38.986055Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Sep 5 23:51:38.989896 waagent[1904]: 2025-09-05T23:51:38.989837Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Sep 5 23:51:39.070123 waagent[1904]: 2025-09-05T23:51:39.070037Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F35DFF2E5D82E8D00EAE918D15B1D3C2999AA1F6', 'hasPrivateKey': True}
Sep 5 23:51:39.070635 waagent[1904]: 2025-09-05T23:51:39.070585Z INFO ExtHandler Fetch goal state completed
Sep 5 23:51:39.085706 waagent[1904]: 2025-09-05T23:51:39.085650Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1904
Sep 5 23:51:39.085849 waagent[1904]: 2025-09-05T23:51:39.085814Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Sep 5 23:51:39.087488 waagent[1904]: 2025-09-05T23:51:39.087441Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk']
Sep 5 23:51:39.087846 waagent[1904]: 2025-09-05T23:51:39.087806Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Sep 5 23:51:39.154016 waagent[1904]: 2025-09-05T23:51:39.153970Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Sep 5 23:51:39.154213 waagent[1904]: 2025-09-05T23:51:39.154174Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Sep 5 23:51:39.160770 waagent[1904]: 2025-09-05T23:51:39.160726Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Sep 5 23:51:39.167295 systemd[1]: Reloading requested from client PID 1917 ('systemctl') (unit waagent.service)...
Sep 5 23:51:39.167308 systemd[1]: Reloading...
Sep 5 23:51:39.245891 zram_generator::config[1954]: No configuration found.
Sep 5 23:51:39.344797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:51:39.419228 systemd[1]: Reloading finished in 251 ms.
Sep 5 23:51:39.444393 waagent[1904]: 2025-09-05T23:51:39.440257Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Sep 5 23:51:39.446336 systemd[1]: Reloading requested from client PID 2005 ('systemctl') (unit waagent.service)...
Sep 5 23:51:39.446438 systemd[1]: Reloading...
Sep 5 23:51:39.518909 zram_generator::config[2039]: No configuration found.
Sep 5 23:51:39.622398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 23:51:39.697132 systemd[1]: Reloading finished in 250 ms.
Sep 5 23:51:39.718786 waagent[1904]: 2025-09-05T23:51:39.718040Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Sep 5 23:51:39.718786 waagent[1904]: 2025-09-05T23:51:39.718202Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Sep 5 23:51:40.200891 waagent[1904]: 2025-09-05T23:51:40.200521Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Sep 5 23:51:40.201198 waagent[1904]: 2025-09-05T23:51:40.201149Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Sep 5 23:51:40.202012 waagent[1904]: 2025-09-05T23:51:40.201922Z INFO ExtHandler ExtHandler Starting env monitor service.
Sep 5 23:51:40.202493 waagent[1904]: 2025-09-05T23:51:40.202325Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Sep 5 23:51:40.203533 waagent[1904]: 2025-09-05T23:51:40.202704Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 5 23:51:40.203533 waagent[1904]: 2025-09-05T23:51:40.202790Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 5 23:51:40.203533 waagent[1904]: 2025-09-05T23:51:40.202944Z INFO EnvHandler ExtHandler Configure routes
Sep 5 23:51:40.203533 waagent[1904]: 2025-09-05T23:51:40.203026Z INFO EnvHandler ExtHandler Gateway:None
Sep 5 23:51:40.203533 waagent[1904]: 2025-09-05T23:51:40.203072Z INFO EnvHandler ExtHandler Routes:None
Sep 5 23:51:40.203836 waagent[1904]: 2025-09-05T23:51:40.203772Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Sep 5 23:51:40.204016 waagent[1904]: 2025-09-05T23:51:40.203953Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Sep 5 23:51:40.204156 waagent[1904]: 2025-09-05T23:51:40.204112Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Sep 5 23:51:40.204431 waagent[1904]: 2025-09-05T23:51:40.204382Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Sep 5 23:51:40.204835 waagent[1904]: 2025-09-05T23:51:40.204783Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Sep 5 23:51:40.205214 waagent[1904]: 2025-09-05T23:51:40.205165Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Sep 5 23:51:40.205214 waagent[1904]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Sep 5 23:51:40.205214 waagent[1904]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Sep 5 23:51:40.205214 waagent[1904]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Sep 5 23:51:40.205214 waagent[1904]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Sep 5 23:51:40.205214 waagent[1904]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 5 23:51:40.205214 waagent[1904]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Sep 5 23:51:40.205531 waagent[1904]: 2025-09-05T23:51:40.205483Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Sep 5 23:51:40.205981 waagent[1904]: 2025-09-05T23:51:40.205937Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Sep 5 23:51:40.208710 waagent[1904]: 2025-09-05T23:51:40.208044Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Sep 5 23:51:40.217148 waagent[1904]: 2025-09-05T23:51:40.216995Z INFO ExtHandler ExtHandler
Sep 5 23:51:40.218883 waagent[1904]: 2025-09-05T23:51:40.217310Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1c000dd0-4cb9-42e1-b8df-08e9f08c452c correlation d9fbfb5c-4c20-43ea-b5c7-d39ab5cacc34 created: 2025-09-05T23:50:21.576141Z]
Sep 5 23:51:40.218883 waagent[1904]: 2025-09-05T23:51:40.217679Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Sep 5 23:51:40.218883 waagent[1904]: 2025-09-05T23:51:40.218227Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Sep 5 23:51:40.252323 waagent[1904]: 2025-09-05T23:51:40.252260Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4B8A6F54-AEEA-48E8-9EBE-F305381C2BF3;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Sep 5 23:51:40.277113 waagent[1904]: 2025-09-05T23:51:40.277024Z INFO MonitorHandler ExtHandler Network interfaces:
Sep 5 23:51:40.277113 waagent[1904]: Executing ['ip', '-a', '-o', 'link']:
Sep 5 23:51:40.277113 waagent[1904]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Sep 5 23:51:40.277113 waagent[1904]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:81:85 brd ff:ff:ff:ff:ff:ff
Sep 5 23:51:40.277113 waagent[1904]: 3: enP64462s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:81:85 brd ff:ff:ff:ff:ff:ff\ altname enP64462p0s2
Sep 5 23:51:40.277113 waagent[1904]: Executing ['ip', '-4', '-a', '-o', 'address']:
Sep 5 23:51:40.277113 waagent[1904]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Sep 5 23:51:40.277113 waagent[1904]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Sep 5 23:51:40.277113 waagent[1904]: Executing ['ip', '-6', '-a', '-o', 'address']:
Sep 5 23:51:40.277113 waagent[1904]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Sep 5 23:51:40.277113 waagent[1904]: 2: eth0 inet6 fe80::222:48ff:fe79:8185/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Sep 5 23:51:40.304113 waagent[1904]: 2025-09-05T23:51:40.304049Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Sep 5 23:51:40.304113 waagent[1904]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 5 23:51:40.304113 waagent[1904]: pkts bytes target prot opt in out source destination
Sep 5 23:51:40.304113 waagent[1904]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 5 23:51:40.304113 waagent[1904]: pkts bytes target prot opt in out source destination
Sep 5 23:51:40.304113 waagent[1904]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 5 23:51:40.304113 waagent[1904]: pkts bytes target prot opt in out source destination
Sep 5 23:51:40.304113 waagent[1904]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 5 23:51:40.304113 waagent[1904]: 6 509 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 5 23:51:40.304113 waagent[1904]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 5 23:51:40.307674 waagent[1904]: 2025-09-05T23:51:40.307364Z INFO EnvHandler ExtHandler Current Firewall rules:
Sep 5 23:51:40.307674 waagent[1904]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 5 23:51:40.307674 waagent[1904]: pkts bytes target prot opt in out source destination
Sep 5 23:51:40.307674 waagent[1904]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Sep 5 23:51:40.307674 waagent[1904]: pkts bytes target prot opt in out source destination
Sep 5 23:51:40.307674 waagent[1904]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Sep 5 23:51:40.307674 waagent[1904]: pkts bytes target prot opt in out source destination
Sep 5 23:51:40.307674 waagent[1904]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Sep 5 23:51:40.307674 waagent[1904]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Sep 5 23:51:40.307674 waagent[1904]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Sep 5 23:51:40.307975 waagent[1904]: 2025-09-05T23:51:40.307930Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Sep 5 23:51:44.534880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 23:51:44.543034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:51:44.636581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
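[Editor's note: the /proc/net/route dump above stores each IPv4 field as little-endian hex, so once decoded it matches earlier entries in this log: 0114C80A is the DHCP gateway 10.200.20.1, 10813FA8 is the WireServer 168.63.129.16 (the same address the firewall rules fence off for non-root traffic), and FEA9FEA9 is the instance metadata endpoint 169.254.169.254. A short decoding sketch:

import socket
import struct

def decode(hexaddr):
    # /proc/net/route prints IPv4 addresses as little-endian hex
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

assert decode("0114C80A") == "10.200.20.1"      # default gateway
assert decode("10813FA8") == "168.63.129.16"    # Azure WireServer
assert decode("FEA9FEA9") == "169.254.169.254"  # instance metadata
assert decode("00FFFFFF") == "255.255.255.0"    # the /24 netmask
]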
Sep 5 23:51:54.919336 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:51:55.002741 kubelet[2147]: E0905 23:51:55.002648 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:51:55.005388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:51:55.005531 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:51:56.548129 chronyd[1680]: Selected source PHC0
Sep 5 23:52:05.034968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 5 23:52:05.043032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 23:52:05.145354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 23:52:05.149768 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 23:52:05.228690 kubelet[2162]: E0905 23:52:05.228644 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 23:52:05.231367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 23:52:05.231610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 23:52:08.513765 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 23:52:08.514903 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:34772.service - OpenSSH per-connection server daemon (10.200.16.10:34772).
Sep 5 23:52:09.016303 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 34772 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw
Sep 5 23:52:09.017582 sshd[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:52:09.021193 systemd-logind[1701]: New session 3 of user core.
Sep 5 23:52:09.028990 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 23:52:09.438631 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:34774.service - OpenSSH per-connection server daemon (10.200.16.10:34774).
Sep 5 23:52:09.899459 sshd[2175]: Accepted publickey for core from 10.200.16.10 port 34774 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw
Sep 5 23:52:09.900753 sshd[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:52:09.904397 systemd-logind[1701]: New session 4 of user core.
Sep 5 23:52:09.914977 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 23:52:10.243205 sshd[2175]: pam_unix(sshd:session): session closed for user core
Sep 5 23:52:10.245848 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:34774.service: Deactivated successfully.
Sep 5 23:52:10.247427 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 23:52:10.249337 systemd-logind[1701]: Session 4 logged out. Waiting for processes to exit.
Sep 5 23:52:10.250223 systemd-logind[1701]: Removed session 4.
Sep 5 23:52:10.319672 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:60242.service - OpenSSH per-connection server daemon (10.200.16.10:60242).
Sep 5 23:52:10.781820 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 60242 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw
Sep 5 23:52:10.783090 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:52:10.786774 systemd-logind[1701]: New session 5 of user core.
Sep 5 23:52:10.795020 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 23:52:11.124545 sshd[2182]: pam_unix(sshd:session): session closed for user core
Sep 5 23:52:11.127383 systemd-logind[1701]: Session 5 logged out. Waiting for processes to exit.
Sep 5 23:52:11.127742 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:60242.service: Deactivated successfully.
Sep 5 23:52:11.129492 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 23:52:11.131119 systemd-logind[1701]: Removed session 5.
Sep 5 23:52:11.201663 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:60244.service - OpenSSH per-connection server daemon (10.200.16.10:60244).
Sep 5 23:52:11.627852 sshd[2189]: Accepted publickey for core from 10.200.16.10 port 60244 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw
Sep 5 23:52:11.629235 sshd[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:52:11.633113 systemd-logind[1701]: New session 6 of user core.
Sep 5 23:52:11.645026 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 23:52:11.952107 sshd[2189]: pam_unix(sshd:session): session closed for user core
Sep 5 23:52:11.955729 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:60244.service: Deactivated successfully.
Sep 5 23:52:11.957210 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 23:52:11.958460 systemd-logind[1701]: Session 6 logged out. Waiting for processes to exit.
Sep 5 23:52:11.959653 systemd-logind[1701]: Removed session 6.
Sep 5 23:52:12.039681 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:60254.service - OpenSSH per-connection server daemon (10.200.16.10:60254).
Sep 5 23:52:12.506819 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 60254 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw
Sep 5 23:52:12.508126 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 23:52:12.512248 systemd-logind[1701]: New session 7 of user core.
Sep 5 23:52:12.522006 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 23:52:12.928207 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 23:52:12.928477 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 23:52:12.960680 sudo[2199]: pam_unix(sudo:session): session closed for user root
Sep 5 23:52:13.034666 sshd[2196]: pam_unix(sshd:session): session closed for user core
Sep 5 23:52:13.038256 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:60254.service: Deactivated successfully.
Sep 5 23:52:13.039747 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 23:52:13.041493 systemd-logind[1701]: Session 7 logged out. Waiting for processes to exit.
Sep 5 23:52:13.042909 systemd-logind[1701]: Removed session 7.
Sep 5 23:52:13.111915 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:60264.service - OpenSSH per-connection server daemon (10.200.16.10:60264).
Sep 5 23:52:13.535140 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 60264 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:52:13.536484 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:13.540146 systemd-logind[1701]: New session 8 of user core. Sep 5 23:52:13.550009 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:52:13.777770 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:52:13.778065 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:13.781110 sudo[2208]: pam_unix(sudo:session): session closed for user root Sep 5 23:52:13.785396 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:52:13.785645 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:13.798353 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:52:13.799608 auditctl[2211]: No rules Sep 5 23:52:13.800055 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:52:13.800218 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:52:13.802692 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:52:13.824570 augenrules[2229]: No rules Sep 5 23:52:13.827994 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:52:13.829569 sudo[2207]: pam_unix(sudo:session): session closed for user root Sep 5 23:52:13.912133 sshd[2204]: pam_unix(sshd:session): session closed for user core Sep 5 23:52:13.914739 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:60264.service: Deactivated successfully. Sep 5 23:52:13.916312 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:52:13.918175 systemd-logind[1701]: Session 8 logged out. Waiting for processes to exit. Sep 5 23:52:13.919042 systemd-logind[1701]: Removed session 8. Sep 5 23:52:14.003836 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:60272.service - OpenSSH per-connection server daemon (10.200.16.10:60272). Sep 5 23:52:14.426762 sshd[2237]: Accepted publickey for core from 10.200.16.10 port 60272 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:52:14.428063 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:52:14.431954 systemd-logind[1701]: New session 9 of user core. Sep 5 23:52:14.444029 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:52:14.670168 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:52:14.670434 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:52:15.284905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 5 23:52:15.295163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:15.885555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
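Session 8 rebuilds the audit rule set: the two sudo commands delete the SELinux and default rule fragments and restart audit-rules, after which both auditctl and augenrules report "No rules" because /etc/audit/rules.d no longer contributes anything. Replayed as plain commands, using the same paths the log records:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    # with the fragments gone, listing the active rules prints "No rules"
    sudo auditctl -l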
Sep 5 23:52:15.889799 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:52:15.927537 kubelet[2259]: E0905 23:52:15.927461 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:52:15.930175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:52:15.930407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:52:16.042353 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 5 23:52:16.234265 (dockerd)[2272]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:52:16.235010 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:52:16.939474 dockerd[2272]: time="2025-09-05T23:52:16.938898747Z" level=info msg="Starting up" Sep 5 23:52:17.284516 dockerd[2272]: time="2025-09-05T23:52:17.284096041Z" level=info msg="Loading containers: start." Sep 5 23:52:17.505107 kernel: Initializing XFRM netlink socket Sep 5 23:52:17.629525 systemd-networkd[1504]: docker0: Link UP Sep 5 23:52:17.659469 dockerd[2272]: time="2025-09-05T23:52:17.658924933Z" level=info msg="Loading containers: done." Sep 5 23:52:17.669421 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2395551832-merged.mount: Deactivated successfully. Sep 5 23:52:17.684556 dockerd[2272]: time="2025-09-05T23:52:17.684511531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:52:17.684696 dockerd[2272]: time="2025-09-05T23:52:17.684652491Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:52:17.684778 dockerd[2272]: time="2025-09-05T23:52:17.684759331Z" level=info msg="Daemon has completed initialization" Sep 5 23:52:17.744960 dockerd[2272]: time="2025-09-05T23:52:17.744585127Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:52:17.744801 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:52:17.929930 update_engine[1705]: I20250905 23:52:17.929283 1705 update_attempter.cc:509] Updating boot flags... Sep 5 23:52:17.981271 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2416) Sep 5 23:52:18.095954 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2420) Sep 5 23:52:18.191924 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2420) Sep 5 23:52:18.769894 containerd[1735]: time="2025-09-05T23:52:18.769839571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 23:52:19.764843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218639598.mount: Deactivated successfully. 
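The dockerd warning about overlay2 is informational: because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, the daemon avoids the native overlayfs diff path and falls back to the slower naive diff when committing image layers; the daemon still comes up on overlay2, as the "API listen" line confirms. Assuming the docker CLI is present on the node, the active driver can be read back with:

    # report the storage driver the daemon settled on (expected: overlay2)
    docker info --format '{{.Driver}}'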
Sep 5 23:52:21.520897 containerd[1735]: time="2025-09-05T23:52:21.520721687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:21.526955 containerd[1735]: time="2025-09-05T23:52:21.526900606Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 5 23:52:21.532663 containerd[1735]: time="2025-09-05T23:52:21.532584726Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:21.537198 containerd[1735]: time="2025-09-05T23:52:21.537152325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:21.538459 containerd[1735]: time="2025-09-05T23:52:21.538296245Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.768401954s" Sep 5 23:52:21.538459 containerd[1735]: time="2025-09-05T23:52:21.538330445Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 5 23:52:21.539629 containerd[1735]: time="2025-09-05T23:52:21.539589805Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 23:52:23.373911 containerd[1735]: time="2025-09-05T23:52:23.373232233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:23.377300 containerd[1735]: time="2025-09-05T23:52:23.377060272Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Sep 5 23:52:23.381816 containerd[1735]: time="2025-09-05T23:52:23.381767192Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:23.390176 containerd[1735]: time="2025-09-05T23:52:23.390103591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:23.391407 containerd[1735]: time="2025-09-05T23:52:23.391266630Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.851550705s" Sep 5 23:52:23.391407 containerd[1735]: time="2025-09-05T23:52:23.391302310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 5 23:52:23.391978 containerd[1735]: 
time="2025-09-05T23:52:23.391876630Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 23:52:24.944006 containerd[1735]: time="2025-09-05T23:52:24.943947691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:24.947763 containerd[1735]: time="2025-09-05T23:52:24.947728010Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 5 23:52:24.952346 containerd[1735]: time="2025-09-05T23:52:24.952296570Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:24.959649 containerd[1735]: time="2025-09-05T23:52:24.959290329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:24.960375 containerd[1735]: time="2025-09-05T23:52:24.960341369Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.568281899s" Sep 5 23:52:24.960375 containerd[1735]: time="2025-09-05T23:52:24.960373929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 5 23:52:24.960972 containerd[1735]: time="2025-09-05T23:52:24.960944769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 23:52:26.034829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 5 23:52:26.043032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:26.393700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:26.398112 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:52:26.431792 kubelet[2568]: E0905 23:52:26.431732 2568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:52:26.434230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:52:26.434373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:52:27.064493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3992949324.mount: Deactivated successfully. 
Sep 5 23:52:27.406619 containerd[1735]: time="2025-09-05T23:52:27.406115846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:27.410286 containerd[1735]: time="2025-09-05T23:52:27.410254965Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 5 23:52:27.414547 containerd[1735]: time="2025-09-05T23:52:27.414498605Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:27.420433 containerd[1735]: time="2025-09-05T23:52:27.420383964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:27.421415 containerd[1735]: time="2025-09-05T23:52:27.420979364Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 2.459999075s" Sep 5 23:52:27.421415 containerd[1735]: time="2025-09-05T23:52:27.421013044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 5 23:52:27.421565 containerd[1735]: time="2025-09-05T23:52:27.421537964Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 23:52:28.224372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781116635.mount: Deactivated successfully. 
Sep 5 23:52:30.665202 containerd[1735]: time="2025-09-05T23:52:30.665156331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:30.668535 containerd[1735]: time="2025-09-05T23:52:30.668499371Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 5 23:52:30.674092 containerd[1735]: time="2025-09-05T23:52:30.674020650Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:30.679423 containerd[1735]: time="2025-09-05T23:52:30.679373888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:30.680718 containerd[1735]: time="2025-09-05T23:52:30.680498968Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 3.258927804s" Sep 5 23:52:30.680718 containerd[1735]: time="2025-09-05T23:52:30.680534008Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 5 23:52:30.681764 containerd[1735]: time="2025-09-05T23:52:30.680967168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:52:31.265027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694106362.mount: Deactivated successfully. 
Sep 5 23:52:31.297897 containerd[1735]: time="2025-09-05T23:52:31.297695804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:31.301720 containerd[1735]: time="2025-09-05T23:52:31.301495883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 5 23:52:31.307168 containerd[1735]: time="2025-09-05T23:52:31.305823802Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:31.311881 containerd[1735]: time="2025-09-05T23:52:31.311360841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:31.312756 containerd[1735]: time="2025-09-05T23:52:31.312054121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 631.053593ms" Sep 5 23:52:31.312756 containerd[1735]: time="2025-09-05T23:52:31.312088721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:52:31.312756 containerd[1735]: time="2025-09-05T23:52:31.312514601Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 23:52:32.009515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684686391.mount: Deactivated successfully. Sep 5 23:52:35.811895 containerd[1735]: time="2025-09-05T23:52:35.811759536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:35.814625 containerd[1735]: time="2025-09-05T23:52:35.814589335Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 5 23:52:35.822468 containerd[1735]: time="2025-09-05T23:52:35.822420694Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:35.830410 containerd[1735]: time="2025-09-05T23:52:35.830338732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:52:35.831842 containerd[1735]: time="2025-09-05T23:52:35.831678372Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.519123131s" Sep 5 23:52:35.831842 containerd[1735]: time="2025-09-05T23:52:35.831714292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 5 23:52:36.496650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
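The pulls in this stretch (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) are the standard control-plane image preload; each "Pulled image ... in Ns" line records the resolved digest and the wall time. If one of them ever needs repeating by hand, a sketch using crictl against containerd's CRI socket (crictl's presence and the socket path are assumptions, not shown in this log):

    # re-pull one control-plane image through the CRI, as the PullImage calls above do
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/etcd:3.5.21-0
    # list what the runtime now has cached
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images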
Sep 5 23:52:36.503091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:36.649016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:36.653283 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:52:36.687439 kubelet[2723]: E0905 23:52:36.687371 2723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:52:36.691415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:52:36.691556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:52:42.255367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:42.269314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:42.294371 systemd[1]: Reloading requested from client PID 2737 ('systemctl') (unit session-9.scope)... Sep 5 23:52:42.294391 systemd[1]: Reloading... Sep 5 23:52:42.408891 zram_generator::config[2777]: No configuration found. Sep 5 23:52:42.513193 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:52:42.590094 systemd[1]: Reloading finished in 295 ms. Sep 5 23:52:42.633047 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 23:52:42.633120 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 23:52:42.633371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:42.639392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:42.839912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:42.850158 (kubelet)[2844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:52:42.884671 kubelet[2844]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:52:42.885275 kubelet[2844]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:52:42.885275 kubelet[2844]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
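The three deprecation warnings fire because kubeadm still passes --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on the command line, through the KUBELET_KUBEADM_ARGS variable the unit expands; current kubelets want those settings in the config file instead. Assuming the stock kubeadm drop-in layout, the source of the flags can be inspected with:

    # kubeadm writes the extra kubelet flags here; the unit expands them as $KUBELET_KUBEADM_ARGS
    cat /var/lib/kubelet/kubeadm-flags.env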
Sep 5 23:52:42.885275 kubelet[2844]: I0905 23:52:42.885129 2844 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:52:43.205577 kubelet[2844]: I0905 23:52:43.205478 2844 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 23:52:43.205838 kubelet[2844]: I0905 23:52:43.205700 2844 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:52:43.206896 kubelet[2844]: I0905 23:52:43.206067 2844 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 23:52:43.222055 kubelet[2844]: E0905 23:52:43.222014 2844 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 23:52:43.222503 kubelet[2844]: I0905 23:52:43.222239 2844 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:52:43.232650 kubelet[2844]: E0905 23:52:43.232604 2844 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:52:43.232791 kubelet[2844]: I0905 23:52:43.232778 2844 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:52:43.235910 kubelet[2844]: I0905 23:52:43.235889 2844 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:52:43.237890 kubelet[2844]: I0905 23:52:43.237451 2844 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:52:43.237890 kubelet[2844]: I0905 23:52:43.237490 2844 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-29d70f4830","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:52:43.237890 kubelet[2844]: I0905 23:52:43.237656 2844 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:52:43.237890 kubelet[2844]: I0905 23:52:43.237665 2844 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 23:52:43.237890 kubelet[2844]: I0905 23:52:43.237790 2844 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:52:43.240568 kubelet[2844]: I0905 23:52:43.240532 2844 kubelet.go:480] "Attempting to sync node with API server" Sep 5 23:52:43.240663 kubelet[2844]: I0905 23:52:43.240567 2844 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:52:43.240690 kubelet[2844]: I0905 23:52:43.240685 2844 kubelet.go:386] "Adding apiserver pod source" Sep 5 23:52:43.242526 kubelet[2844]: I0905 23:52:43.242499 2844 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:52:43.246052 kubelet[2844]: E0905 23:52:43.246014 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-29d70f4830&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 23:52:43.246293 kubelet[2844]: I0905 23:52:43.246267 2844 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:52:43.246914 kubelet[2844]: I0905 23:52:43.246890 2844 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Sep 5 23:52:43.248065 kubelet[2844]: W0905 23:52:43.246955 2844 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:52:43.249777 kubelet[2844]: I0905 23:52:43.249751 2844 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:52:43.249852 kubelet[2844]: I0905 23:52:43.249793 2844 server.go:1289] "Started kubelet" Sep 5 23:52:43.252880 kubelet[2844]: E0905 23:52:43.252841 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 23:52:43.253413 kubelet[2844]: I0905 23:52:43.253073 2844 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:52:43.255424 kubelet[2844]: I0905 23:52:43.255407 2844 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:52:43.257368 kubelet[2844]: E0905 23:52:43.256124 2844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-29d70f4830.18628805d4875bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-29d70f4830,UID:ci-4081.3.5-n-29d70f4830,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-29d70f4830,},FirstTimestamp:2025-09-05 23:52:43.249769404 +0000 UTC m=+0.396362856,LastTimestamp:2025-09-05 23:52:43.249769404 +0000 UTC m=+0.396362856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-29d70f4830,}" Sep 5 23:52:43.258534 kubelet[2844]: I0905 23:52:43.253416 2844 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:52:43.258699 kubelet[2844]: I0905 23:52:43.258673 2844 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:52:43.259367 kubelet[2844]: I0905 23:52:43.259329 2844 server.go:317] "Adding debug handlers to kubelet server" Sep 5 23:52:43.260267 kubelet[2844]: I0905 23:52:43.260231 2844 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:52:43.262135 kubelet[2844]: I0905 23:52:43.262117 2844 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:52:43.262397 kubelet[2844]: E0905 23:52:43.262379 2844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:43.262476 kubelet[2844]: E0905 23:52:43.262388 2844 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:52:43.263085 kubelet[2844]: I0905 23:52:43.263053 2844 factory.go:223] Registration of the systemd container factory successfully Sep 5 23:52:43.263751 kubelet[2844]: I0905 23:52:43.263189 2844 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:52:43.264708 kubelet[2844]: I0905 23:52:43.264671 2844 factory.go:223] Registration of the containerd container factory successfully Sep 5 23:52:43.265703 kubelet[2844]: I0905 23:52:43.265685 2844 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:52:43.265831 kubelet[2844]: I0905 23:52:43.265822 2844 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:52:43.270259 kubelet[2844]: E0905 23:52:43.270230 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 23:52:43.271127 kubelet[2844]: E0905 23:52:43.271091 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-29d70f4830?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Sep 5 23:52:43.362754 kubelet[2844]: E0905 23:52:43.362674 2844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:43.389056 kubelet[2844]: I0905 23:52:43.388997 2844 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 23:52:43.391173 kubelet[2844]: I0905 23:52:43.390979 2844 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 23:52:43.391173 kubelet[2844]: I0905 23:52:43.391001 2844 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 23:52:43.391173 kubelet[2844]: I0905 23:52:43.391021 2844 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
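A few entries back, "Adding static pod path" path="/etc/kubernetes/manifests" is the key to everything that follows: the kubelet watches that directory and runs whatever manifests it finds as static pods, which is how kube-apiserver, kube-controller-manager and kube-scheduler get started below before any API server is reachable. On a kubeadm control-plane node the directory can simply be listed (the layout is assumed, not shown in this log):

    # static pod manifests the kubelet picks up without an API server
    ls /etc/kubernetes/manifests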
Sep 5 23:52:43.391173 kubelet[2844]: I0905 23:52:43.391029 2844 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 23:52:43.391173 kubelet[2844]: E0905 23:52:43.391067 2844 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:52:43.392938 kubelet[2844]: E0905 23:52:43.392782 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 23:52:43.463687 kubelet[2844]: E0905 23:52:43.462806 2844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:43.471515 kubelet[2844]: E0905 23:52:43.471476 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-29d70f4830?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Sep 5 23:52:43.491585 kubelet[2844]: E0905 23:52:43.491554 2844 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 23:52:43.555683 kubelet[2844]: I0905 23:52:43.555655 2844 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:52:43.555683 kubelet[2844]: I0905 23:52:43.555682 2844 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:52:43.555683 kubelet[2844]: I0905 23:52:43.555698 2844 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:52:43.563193 kubelet[2844]: E0905 23:52:43.563164 2844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:43.574144 kubelet[2844]: I0905 23:52:43.574107 2844 policy_none.go:49] "None policy: Start" Sep 5 23:52:43.574144 kubelet[2844]: I0905 23:52:43.574132 2844 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:52:43.574144 kubelet[2844]: I0905 23:52:43.574145 2844 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:52:43.587228 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:52:43.597475 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:52:43.601588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 23:52:43.611697 kubelet[2844]: E0905 23:52:43.611660 2844 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 23:52:43.612253 kubelet[2844]: I0905 23:52:43.611891 2844 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:52:43.612253 kubelet[2844]: I0905 23:52:43.611909 2844 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:52:43.612253 kubelet[2844]: I0905 23:52:43.612146 2844 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:52:43.614461 kubelet[2844]: E0905 23:52:43.614429 2844 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 23:52:43.614601 kubelet[2844]: E0905 23:52:43.614584 2844 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:43.705467 systemd[1]: Created slice kubepods-burstable-podead9039b60377823417f31a1caf2616b.slice - libcontainer container kubepods-burstable-podead9039b60377823417f31a1caf2616b.slice. Sep 5 23:52:43.713940 kubelet[2844]: I0905 23:52:43.713484 2844 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.714152 kubelet[2844]: E0905 23:52:43.714131 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.714753 kubelet[2844]: E0905 23:52:43.714446 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.719292 systemd[1]: Created slice kubepods-burstable-pod45b1194903a910eb95c8473ca5323a69.slice - libcontainer container kubepods-burstable-pod45b1194903a910eb95c8473ca5323a69.slice. Sep 5 23:52:43.729205 kubelet[2844]: E0905 23:52:43.729023 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.732617 systemd[1]: Created slice kubepods-burstable-pode3622f221756f54814a3c30e893f593d.slice - libcontainer container kubepods-burstable-pode3622f221756f54814a3c30e893f593d.slice. Sep 5 23:52:43.734387 kubelet[2844]: E0905 23:52:43.734226 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.768853 kubelet[2844]: I0905 23:52:43.768815 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3622f221756f54814a3c30e893f593d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-29d70f4830\" (UID: \"e3622f221756f54814a3c30e893f593d\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.768955 kubelet[2844]: I0905 23:52:43.768874 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ead9039b60377823417f31a1caf2616b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" (UID: \"ead9039b60377823417f31a1caf2616b\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.768955 kubelet[2844]: I0905 23:52:43.768898 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.768955 kubelet[2844]: I0905 23:52:43.768914 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-k8s-certs\") pod 
\"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.768955 kubelet[2844]: I0905 23:52:43.768931 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ead9039b60377823417f31a1caf2616b-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" (UID: \"ead9039b60377823417f31a1caf2616b\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.768955 kubelet[2844]: I0905 23:52:43.768946 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ead9039b60377823417f31a1caf2616b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" (UID: \"ead9039b60377823417f31a1caf2616b\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.769073 kubelet[2844]: I0905 23:52:43.768960 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.769073 kubelet[2844]: I0905 23:52:43.768974 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.769073 kubelet[2844]: I0905 23:52:43.768989 2844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.872147 kubelet[2844]: E0905 23:52:43.872093 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-29d70f4830?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Sep 5 23:52:43.915815 kubelet[2844]: I0905 23:52:43.915773 2844 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:43.916271 kubelet[2844]: E0905 23:52:43.916239 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:44.015702 containerd[1735]: time="2025-09-05T23:52:44.015595478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-29d70f4830,Uid:ead9039b60377823417f31a1caf2616b,Namespace:kube-system,Attempt:0,}" Sep 5 23:52:44.030085 containerd[1735]: time="2025-09-05T23:52:44.030004036Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-29d70f4830,Uid:45b1194903a910eb95c8473ca5323a69,Namespace:kube-system,Attempt:0,}" Sep 5 23:52:44.035892 containerd[1735]: time="2025-09-05T23:52:44.035791235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-29d70f4830,Uid:e3622f221756f54814a3c30e893f593d,Namespace:kube-system,Attempt:0,}" Sep 5 23:52:44.146254 kubelet[2844]: E0905 23:52:44.146221 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-29d70f4830&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 23:52:44.318743 kubelet[2844]: I0905 23:52:44.318701 2844 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:44.319116 kubelet[2844]: E0905 23:52:44.319088 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:44.334466 kubelet[2844]: E0905 23:52:44.334439 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 23:52:44.556398 kubelet[2844]: E0905 23:52:44.556272 2844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-29d70f4830.18628805d4875bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-29d70f4830,UID:ci-4081.3.5-n-29d70f4830,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-29d70f4830,},FirstTimestamp:2025-09-05 23:52:43.249769404 +0000 UTC m=+0.396362856,LastTimestamp:2025-09-05 23:52:43.249769404 +0000 UTC m=+0.396362856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-29d70f4830,}" Sep 5 23:52:44.577346 kubelet[2844]: E0905 23:52:44.577233 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 23:52:44.673345 kubelet[2844]: E0905 23:52:44.673277 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-29d70f4830?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Sep 5 23:52:44.758265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571984183.mount: Deactivated successfully. 
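Every "connection refused" in this stretch points at https://10.200.20.38:6443, the kube-apiserver this kubelet is itself about to start as a static pod; the reflectors, the lease controller and the event writer all back off and retry until that pod is up, so these errors are the expected bootstrap order rather than a fault. A quick way to confirm nothing is listening yet:

    # empty output until the kube-apiserver static pod binds the port
    ss -tln | grep 6443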
Sep 5 23:52:44.801231 containerd[1735]: time="2025-09-05T23:52:44.801187209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:52:44.803938 containerd[1735]: time="2025-09-05T23:52:44.803854050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 5 23:52:44.810038 containerd[1735]: time="2025-09-05T23:52:44.809296531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:52:44.812070 containerd[1735]: time="2025-09-05T23:52:44.811967691Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:52:44.817473 containerd[1735]: time="2025-09-05T23:52:44.817394652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:52:44.822316 containerd[1735]: time="2025-09-05T23:52:44.821972933Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:52:44.826934 containerd[1735]: time="2025-09-05T23:52:44.826879214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:52:44.832157 containerd[1735]: time="2025-09-05T23:52:44.832063895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:52:44.833407 containerd[1735]: time="2025-09-05T23:52:44.832773735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 802.705419ms" Sep 5 23:52:44.835887 containerd[1735]: time="2025-09-05T23:52:44.834935935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 799.08006ms" Sep 5 23:52:44.835887 containerd[1735]: time="2025-09-05T23:52:44.835532576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 819.858778ms" Sep 5 23:52:44.868384 kubelet[2844]: E0905 23:52:44.868327 2844 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 23:52:45.121370 kubelet[2844]: I0905 23:52:45.121266 2844 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:45.121904 kubelet[2844]: E0905 23:52:45.121854 2844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:45.319388 kubelet[2844]: E0905 23:52:45.319342 2844 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 23:52:45.934899 containerd[1735]: time="2025-09-05T23:52:45.934232490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:45.934899 containerd[1735]: time="2025-09-05T23:52:45.934534530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:45.934899 containerd[1735]: time="2025-09-05T23:52:45.934556610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:45.937330 containerd[1735]: time="2025-09-05T23:52:45.936221090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:45.942477 containerd[1735]: time="2025-09-05T23:52:45.942325691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:45.942477 containerd[1735]: time="2025-09-05T23:52:45.942382571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:45.942477 containerd[1735]: time="2025-09-05T23:52:45.942394571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:45.942617 containerd[1735]: time="2025-09-05T23:52:45.942475531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:45.944237 containerd[1735]: time="2025-09-05T23:52:45.944166131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:45.944368 containerd[1735]: time="2025-09-05T23:52:45.944211851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:45.944368 containerd[1735]: time="2025-09-05T23:52:45.944293331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:45.945146 containerd[1735]: time="2025-09-05T23:52:45.945089612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:45.966791 systemd[1]: run-containerd-runc-k8s.io-b254589686384dcf4a4ba217e6fc009d982edf0a6b697e696be9c0f7afc6caa4-runc.Bmgtf0.mount: Deactivated successfully. Sep 5 23:52:45.976237 systemd[1]: Started cri-containerd-b254589686384dcf4a4ba217e6fc009d982edf0a6b697e696be9c0f7afc6caa4.scope - libcontainer container b254589686384dcf4a4ba217e6fc009d982edf0a6b697e696be9c0f7afc6caa4. Sep 5 23:52:45.980203 systemd[1]: Started cri-containerd-39fb4406234a7663d89b99e41346d217fa34f7e7b9faebf7eec161d62a217598.scope - libcontainer container 39fb4406234a7663d89b99e41346d217fa34f7e7b9faebf7eec161d62a217598. Sep 5 23:52:45.984575 systemd[1]: Started cri-containerd-b198b787c463ece68ff1fede45ae284355e14c778abbf44ff3d3a1e534d782b1.scope - libcontainer container b198b787c463ece68ff1fede45ae284355e14c778abbf44ff3d3a1e534d782b1. Sep 5 23:52:46.034730 containerd[1735]: time="2025-09-05T23:52:46.034499907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-29d70f4830,Uid:45b1194903a910eb95c8473ca5323a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"b254589686384dcf4a4ba217e6fc009d982edf0a6b697e696be9c0f7afc6caa4\"" Sep 5 23:52:46.044807 containerd[1735]: time="2025-09-05T23:52:46.043930429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-29d70f4830,Uid:ead9039b60377823417f31a1caf2616b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b198b787c463ece68ff1fede45ae284355e14c778abbf44ff3d3a1e534d782b1\"" Sep 5 23:52:46.047753 containerd[1735]: time="2025-09-05T23:52:46.047716990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-29d70f4830,Uid:e3622f221756f54814a3c30e893f593d,Namespace:kube-system,Attempt:0,} returns sandbox id \"39fb4406234a7663d89b99e41346d217fa34f7e7b9faebf7eec161d62a217598\"" Sep 5 23:52:46.050202 containerd[1735]: time="2025-09-05T23:52:46.050043190Z" level=info msg="CreateContainer within sandbox \"b254589686384dcf4a4ba217e6fc009d982edf0a6b697e696be9c0f7afc6caa4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:52:46.056819 containerd[1735]: time="2025-09-05T23:52:46.056780031Z" level=info msg="CreateContainer within sandbox \"b198b787c463ece68ff1fede45ae284355e14c778abbf44ff3d3a1e534d782b1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:52:46.061258 containerd[1735]: time="2025-09-05T23:52:46.061137072Z" level=info msg="CreateContainer within sandbox \"39fb4406234a7663d89b99e41346d217fa34f7e7b9faebf7eec161d62a217598\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:52:46.102387 containerd[1735]: time="2025-09-05T23:52:46.102342479Z" level=info msg="CreateContainer within sandbox \"b254589686384dcf4a4ba217e6fc009d982edf0a6b697e696be9c0f7afc6caa4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26c0c485f386c775620eb427f5fedcc3b1a3dcc17102cd0fb1185edc7da38f34\"" Sep 5 23:52:46.103398 containerd[1735]: time="2025-09-05T23:52:46.103368120Z" level=info msg="StartContainer for \"26c0c485f386c775620eb427f5fedcc3b1a3dcc17102cd0fb1185edc7da38f34\"" Sep 5 23:52:46.127033 systemd[1]: Started cri-containerd-26c0c485f386c775620eb427f5fedcc3b1a3dcc17102cd0fb1185edc7da38f34.scope - libcontainer container 26c0c485f386c775620eb427f5fedcc3b1a3dcc17102cd0fb1185edc7da38f34. 
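containerd has now created one runc scope per control-plane container, and the StartContainer calls that follow all return successfully. Assuming crictl is available as above, the running containers can be listed with:

    # running control-plane containers on this node
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps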
Sep 5 23:52:46.140153 containerd[1735]: time="2025-09-05T23:52:46.139993286Z" level=info msg="CreateContainer within sandbox \"b198b787c463ece68ff1fede45ae284355e14c778abbf44ff3d3a1e534d782b1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7edeb823c37526b17a4a6d55343ca915f63561bf0ad1aea570ab15fca1571bfb\"" Sep 5 23:52:46.140745 containerd[1735]: time="2025-09-05T23:52:46.140723526Z" level=info msg="StartContainer for \"7edeb823c37526b17a4a6d55343ca915f63561bf0ad1aea570ab15fca1571bfb\"" Sep 5 23:52:46.146040 containerd[1735]: time="2025-09-05T23:52:46.145945487Z" level=info msg="CreateContainer within sandbox \"39fb4406234a7663d89b99e41346d217fa34f7e7b9faebf7eec161d62a217598\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9d20c086ee13460d5fbdcb32d10dda427d74a74e92c42a9e00b848daa1ff3a6\"" Sep 5 23:52:46.147264 containerd[1735]: time="2025-09-05T23:52:46.147234007Z" level=info msg="StartContainer for \"f9d20c086ee13460d5fbdcb32d10dda427d74a74e92c42a9e00b848daa1ff3a6\"" Sep 5 23:52:46.174068 systemd[1]: Started cri-containerd-7edeb823c37526b17a4a6d55343ca915f63561bf0ad1aea570ab15fca1571bfb.scope - libcontainer container 7edeb823c37526b17a4a6d55343ca915f63561bf0ad1aea570ab15fca1571bfb. Sep 5 23:52:46.184306 containerd[1735]: time="2025-09-05T23:52:46.184259614Z" level=info msg="StartContainer for \"26c0c485f386c775620eb427f5fedcc3b1a3dcc17102cd0fb1185edc7da38f34\" returns successfully" Sep 5 23:52:46.188014 systemd[1]: Started cri-containerd-f9d20c086ee13460d5fbdcb32d10dda427d74a74e92c42a9e00b848daa1ff3a6.scope - libcontainer container f9d20c086ee13460d5fbdcb32d10dda427d74a74e92c42a9e00b848daa1ff3a6. Sep 5 23:52:46.238381 containerd[1735]: time="2025-09-05T23:52:46.238341343Z" level=info msg="StartContainer for \"f9d20c086ee13460d5fbdcb32d10dda427d74a74e92c42a9e00b848daa1ff3a6\" returns successfully" Sep 5 23:52:46.274887 kubelet[2844]: E0905 23:52:46.274429 2844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-29d70f4830?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="3.2s" Sep 5 23:52:46.320803 containerd[1735]: time="2025-09-05T23:52:46.320753998Z" level=info msg="StartContainer for \"7edeb823c37526b17a4a6d55343ca915f63561bf0ad1aea570ab15fca1571bfb\" returns successfully" Sep 5 23:52:46.407318 kubelet[2844]: E0905 23:52:46.406967 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:46.413388 kubelet[2844]: E0905 23:52:46.413259 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:46.415952 kubelet[2844]: E0905 23:52:46.415911 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:46.724699 kubelet[2844]: I0905 23:52:46.724667 2844 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:47.417660 kubelet[2844]: E0905 23:52:47.417395 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" 
node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:47.417660 kubelet[2844]: E0905 23:52:47.417535 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:48.828656 kubelet[2844]: E0905 23:52:48.828618 2844 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-29d70f4830\" not found" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.052695 kubelet[2844]: I0905 23:52:49.052653 2844 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.052695 kubelet[2844]: E0905 23:52:49.052696 2844 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-29d70f4830\": node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:49.064327 kubelet[2844]: I0905 23:52:49.064287 2844 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.101379 kubelet[2844]: E0905 23:52:49.100996 2844 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.101379 kubelet[2844]: I0905 23:52:49.101025 2844 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.115848 kubelet[2844]: E0905 23:52:49.115811 2844 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.115848 kubelet[2844]: I0905 23:52:49.115837 2844 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.118626 kubelet[2844]: E0905 23:52:49.118598 2844 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-29d70f4830\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:49.255651 kubelet[2844]: I0905 23:52:49.255618 2844 apiserver.go:52] "Watching apiserver" Sep 5 23:52:49.266848 kubelet[2844]: I0905 23:52:49.266815 2844 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:52:50.927453 systemd[1]: Reloading requested from client PID 3126 ('systemctl') (unit session-9.scope)... Sep 5 23:52:50.927470 systemd[1]: Reloading... Sep 5 23:52:51.019899 zram_generator::config[3166]: No configuration found. Sep 5 23:52:51.123691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:52:51.219957 systemd[1]: Reloading finished in 292 ms. Sep 5 23:52:51.255376 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:52:51.269081 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:52:51.269357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:51.275083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 5 23:52:51.441183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:52:51.448170 (kubelet)[3230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:52:51.486192 kubelet[3230]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:52:51.486192 kubelet[3230]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:52:51.486192 kubelet[3230]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:52:51.486516 kubelet[3230]: I0905 23:52:51.486199 3230 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:52:51.495657 kubelet[3230]: I0905 23:52:51.494462 3230 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 23:52:51.495657 kubelet[3230]: I0905 23:52:51.494499 3230 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:52:51.495657 kubelet[3230]: I0905 23:52:51.494728 3230 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 23:52:51.497005 kubelet[3230]: I0905 23:52:51.496986 3230 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 23:52:51.499391 kubelet[3230]: I0905 23:52:51.499355 3230 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:52:51.502473 kubelet[3230]: E0905 23:52:51.502450 3230 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:52:51.502577 kubelet[3230]: I0905 23:52:51.502565 3230 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:52:51.505574 kubelet[3230]: I0905 23:52:51.505553 3230 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:52:51.506462 kubelet[3230]: I0905 23:52:51.505932 3230 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:52:51.506462 kubelet[3230]: I0905 23:52:51.506064 3230 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-29d70f4830","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:52:51.506462 kubelet[3230]: I0905 23:52:51.506352 3230 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:52:51.506462 kubelet[3230]: I0905 23:52:51.506364 3230 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 23:52:51.506462 kubelet[3230]: I0905 23:52:51.506418 3230 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:52:51.506931 kubelet[3230]: I0905 23:52:51.506909 3230 kubelet.go:480] "Attempting to sync node with API server" Sep 5 23:52:51.507027 kubelet[3230]: I0905 23:52:51.507016 3230 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:52:51.507096 kubelet[3230]: I0905 23:52:51.507089 3230 kubelet.go:386] "Adding apiserver pod source" Sep 5 23:52:51.507158 kubelet[3230]: I0905 23:52:51.507150 3230 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:52:51.512990 kubelet[3230]: I0905 23:52:51.512961 3230 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:52:51.513553 kubelet[3230]: I0905 23:52:51.513520 3230 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 23:52:51.519266 kubelet[3230]: I0905 23:52:51.519242 3230 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:52:51.519344 kubelet[3230]: I0905 23:52:51.519285 3230 server.go:1289] "Started kubelet" Sep 5 23:52:51.520899 kubelet[3230]: I0905 23:52:51.519407 3230 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:52:51.520899 kubelet[3230]: I0905 
23:52:51.520085 3230 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:52:51.520899 kubelet[3230]: I0905 23:52:51.520236 3230 server.go:317] "Adding debug handlers to kubelet server" Sep 5 23:52:51.520899 kubelet[3230]: I0905 23:52:51.520304 3230 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:52:51.523794 kubelet[3230]: I0905 23:52:51.523772 3230 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:52:51.533163 kubelet[3230]: I0905 23:52:51.533135 3230 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:52:51.536932 kubelet[3230]: I0905 23:52:51.536897 3230 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:52:51.537178 kubelet[3230]: E0905 23:52:51.537153 3230 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-29d70f4830\" not found" Sep 5 23:52:51.540042 kubelet[3230]: I0905 23:52:51.539680 3230 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:52:51.540042 kubelet[3230]: I0905 23:52:51.540011 3230 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:52:51.552882 kubelet[3230]: I0905 23:52:51.551689 3230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 23:52:51.552882 kubelet[3230]: I0905 23:52:51.552813 3230 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 23:52:51.552882 kubelet[3230]: I0905 23:52:51.552828 3230 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 23:52:51.552882 kubelet[3230]: I0905 23:52:51.552872 3230 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 23:52:51.552882 kubelet[3230]: I0905 23:52:51.552880 3230 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 23:52:51.553072 kubelet[3230]: E0905 23:52:51.552942 3230 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:52:51.555013 kubelet[3230]: I0905 23:52:51.554989 3230 factory.go:223] Registration of the systemd container factory successfully Sep 5 23:52:51.555258 kubelet[3230]: I0905 23:52:51.555224 3230 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:52:51.563896 kubelet[3230]: E0905 23:52:51.562568 3230 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:52:51.563896 kubelet[3230]: I0905 23:52:51.563257 3230 factory.go:223] Registration of the containerd container factory successfully Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622473 3230 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622492 3230 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622513 3230 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622635 3230 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622645 3230 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622666 3230 policy_none.go:49] "None policy: Start" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622675 3230 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622682 3230 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:52:51.623397 kubelet[3230]: I0905 23:52:51.622760 3230 state_mem.go:75] "Updated machine memory state" Sep 5 23:52:51.626895 kubelet[3230]: E0905 23:52:51.626874 3230 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 23:52:51.628263 kubelet[3230]: I0905 23:52:51.627138 3230 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:52:51.628263 kubelet[3230]: I0905 23:52:51.627155 3230 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:52:51.628263 kubelet[3230]: I0905 23:52:51.628254 3230 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:52:51.630285 kubelet[3230]: E0905 23:52:51.629449 3230 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 23:52:51.654025 kubelet[3230]: I0905 23:52:51.653981 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.655320 kubelet[3230]: I0905 23:52:51.655014 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.655947 kubelet[3230]: I0905 23:52:51.655907 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.663345 kubelet[3230]: I0905 23:52:51.663293 3230 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 23:52:51.667729 kubelet[3230]: I0905 23:52:51.667702 3230 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 23:52:51.668239 kubelet[3230]: I0905 23:52:51.667940 3230 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 23:52:51.729604 kubelet[3230]: I0905 23:52:51.729581 3230 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.740932 kubelet[3230]: I0905 23:52:51.740814 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.740932 kubelet[3230]: I0905 23:52:51.740879 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3622f221756f54814a3c30e893f593d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-29d70f4830\" (UID: \"e3622f221756f54814a3c30e893f593d\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.740932 kubelet[3230]: I0905 23:52:51.740900 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ead9039b60377823417f31a1caf2616b-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" (UID: \"ead9039b60377823417f31a1caf2616b\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.740932 kubelet[3230]: I0905 23:52:51.740918 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ead9039b60377823417f31a1caf2616b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" (UID: \"ead9039b60377823417f31a1caf2616b\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.741094 kubelet[3230]: I0905 23:52:51.740948 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.741094 kubelet[3230]: I0905 23:52:51.740969 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.741094 kubelet[3230]: I0905 23:52:51.740983 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.741094 kubelet[3230]: I0905 23:52:51.740996 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ead9039b60377823417f31a1caf2616b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" (UID: \"ead9039b60377823417f31a1caf2616b\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.741094 kubelet[3230]: I0905 23:52:51.741022 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45b1194903a910eb95c8473ca5323a69-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-29d70f4830\" (UID: \"45b1194903a910eb95c8473ca5323a69\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.746060 kubelet[3230]: I0905 23:52:51.746022 3230 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:51.746138 kubelet[3230]: I0905 23:52:51.746118 3230 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-29d70f4830" Sep 5 23:52:52.511133 kubelet[3230]: I0905 23:52:52.510869 3230 apiserver.go:52] "Watching apiserver" Sep 5 23:52:52.540515 kubelet[3230]: I0905 23:52:52.540475 3230 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:52:52.607506 kubelet[3230]: I0905 23:52:52.607466 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:52.608713 kubelet[3230]: I0905 23:52:52.608687 3230 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:52.621248 kubelet[3230]: I0905 23:52:52.621218 3230 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 23:52:52.621356 kubelet[3230]: E0905 23:52:52.621268 3230 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-29d70f4830\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:52.626888 kubelet[3230]: I0905 23:52:52.625776 3230 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 23:52:52.626888 kubelet[3230]: E0905 23:52:52.625817 3230 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081.3.5-n-29d70f4830\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" Sep 5 23:52:52.631108 kubelet[3230]: I0905 23:52:52.630706 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-29d70f4830" podStartSLOduration=1.630694966 podStartE2EDuration="1.630694966s" podCreationTimestamp="2025-09-05 23:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:52.629642206 +0000 UTC m=+1.176477402" watchObservedRunningTime="2025-09-05 23:52:52.630694966 +0000 UTC m=+1.177530202" Sep 5 23:52:52.645463 kubelet[3230]: I0905 23:52:52.645333 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-29d70f4830" podStartSLOduration=1.645319244 podStartE2EDuration="1.645319244s" podCreationTimestamp="2025-09-05 23:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:52.644847524 +0000 UTC m=+1.191682720" watchObservedRunningTime="2025-09-05 23:52:52.645319244 +0000 UTC m=+1.192154480" Sep 5 23:52:52.657116 kubelet[3230]: I0905 23:52:52.657060 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-29d70f4830" podStartSLOduration=1.657048243 podStartE2EDuration="1.657048243s" podCreationTimestamp="2025-09-05 23:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:52.656556923 +0000 UTC m=+1.203392159" watchObservedRunningTime="2025-09-05 23:52:52.657048243 +0000 UTC m=+1.203883479" Sep 5 23:52:55.969043 kubelet[3230]: I0905 23:52:55.968845 3230 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:52:55.969419 containerd[1735]: time="2025-09-05T23:52:55.969290314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 23:52:55.969623 kubelet[3230]: I0905 23:52:55.969455 3230 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:52:56.834254 systemd[1]: Created slice kubepods-besteffort-pod347dfa54_e52c_4a30_8ad9_95161f5f52fc.slice - libcontainer container kubepods-besteffort-pod347dfa54_e52c_4a30_8ad9_95161f5f52fc.slice. 
Sep 5 23:52:56.869814 kubelet[3230]: I0905 23:52:56.869768 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klk56\" (UniqueName: \"kubernetes.io/projected/347dfa54-e52c-4a30-8ad9-95161f5f52fc-kube-api-access-klk56\") pod \"kube-proxy-4dx7d\" (UID: \"347dfa54-e52c-4a30-8ad9-95161f5f52fc\") " pod="kube-system/kube-proxy-4dx7d" Sep 5 23:52:56.869814 kubelet[3230]: I0905 23:52:56.869818 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/347dfa54-e52c-4a30-8ad9-95161f5f52fc-kube-proxy\") pod \"kube-proxy-4dx7d\" (UID: \"347dfa54-e52c-4a30-8ad9-95161f5f52fc\") " pod="kube-system/kube-proxy-4dx7d" Sep 5 23:52:56.870068 kubelet[3230]: I0905 23:52:56.869837 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347dfa54-e52c-4a30-8ad9-95161f5f52fc-xtables-lock\") pod \"kube-proxy-4dx7d\" (UID: \"347dfa54-e52c-4a30-8ad9-95161f5f52fc\") " pod="kube-system/kube-proxy-4dx7d" Sep 5 23:52:56.870068 kubelet[3230]: I0905 23:52:56.869853 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347dfa54-e52c-4a30-8ad9-95161f5f52fc-lib-modules\") pod \"kube-proxy-4dx7d\" (UID: \"347dfa54-e52c-4a30-8ad9-95161f5f52fc\") " pod="kube-system/kube-proxy-4dx7d" Sep 5 23:52:57.144990 containerd[1735]: time="2025-09-05T23:52:57.144653770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dx7d,Uid:347dfa54-e52c-4a30-8ad9-95161f5f52fc,Namespace:kube-system,Attempt:0,}" Sep 5 23:52:57.164328 systemd[1]: Created slice kubepods-besteffort-podb5044eea_c1a2_425c_8143_865642d56b2b.slice - libcontainer container kubepods-besteffort-podb5044eea_c1a2_425c_8143_865642d56b2b.slice. Sep 5 23:52:57.171936 kubelet[3230]: I0905 23:52:57.171817 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5044eea-c1a2-425c-8143-865642d56b2b-var-lib-calico\") pod \"tigera-operator-755d956888-6lbxn\" (UID: \"b5044eea-c1a2-425c-8143-865642d56b2b\") " pod="tigera-operator/tigera-operator-755d956888-6lbxn" Sep 5 23:52:57.171936 kubelet[3230]: I0905 23:52:57.171900 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdgw\" (UniqueName: \"kubernetes.io/projected/b5044eea-c1a2-425c-8143-865642d56b2b-kube-api-access-pwdgw\") pod \"tigera-operator-755d956888-6lbxn\" (UID: \"b5044eea-c1a2-425c-8143-865642d56b2b\") " pod="tigera-operator/tigera-operator-755d956888-6lbxn" Sep 5 23:52:57.194287 containerd[1735]: time="2025-09-05T23:52:57.194091523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:57.194287 containerd[1735]: time="2025-09-05T23:52:57.194145283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:57.194287 containerd[1735]: time="2025-09-05T23:52:57.194163683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:57.194287 containerd[1735]: time="2025-09-05T23:52:57.194243683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:57.212810 systemd[1]: run-containerd-runc-k8s.io-114ccc46af73f706b1242bf2ccfe4c03eb9b51e9fb03388251189887c3a8fde1-runc.TQFTTW.mount: Deactivated successfully. Sep 5 23:52:57.228050 systemd[1]: Started cri-containerd-114ccc46af73f706b1242bf2ccfe4c03eb9b51e9fb03388251189887c3a8fde1.scope - libcontainer container 114ccc46af73f706b1242bf2ccfe4c03eb9b51e9fb03388251189887c3a8fde1. Sep 5 23:52:57.249470 containerd[1735]: time="2025-09-05T23:52:57.249427197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4dx7d,Uid:347dfa54-e52c-4a30-8ad9-95161f5f52fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"114ccc46af73f706b1242bf2ccfe4c03eb9b51e9fb03388251189887c3a8fde1\"" Sep 5 23:52:57.257798 containerd[1735]: time="2025-09-05T23:52:57.257688916Z" level=info msg="CreateContainer within sandbox \"114ccc46af73f706b1242bf2ccfe4c03eb9b51e9fb03388251189887c3a8fde1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:52:57.310668 containerd[1735]: time="2025-09-05T23:52:57.310621349Z" level=info msg="CreateContainer within sandbox \"114ccc46af73f706b1242bf2ccfe4c03eb9b51e9fb03388251189887c3a8fde1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94e93b94480e17cacc760b63ff3630e937801885270b04d0bfd3b9791203809b\"" Sep 5 23:52:57.312534 containerd[1735]: time="2025-09-05T23:52:57.311189429Z" level=info msg="StartContainer for \"94e93b94480e17cacc760b63ff3630e937801885270b04d0bfd3b9791203809b\"" Sep 5 23:52:57.342123 systemd[1]: Started cri-containerd-94e93b94480e17cacc760b63ff3630e937801885270b04d0bfd3b9791203809b.scope - libcontainer container 94e93b94480e17cacc760b63ff3630e937801885270b04d0bfd3b9791203809b. Sep 5 23:52:57.379756 containerd[1735]: time="2025-09-05T23:52:57.379704141Z" level=info msg="StartContainer for \"94e93b94480e17cacc760b63ff3630e937801885270b04d0bfd3b9791203809b\" returns successfully" Sep 5 23:52:57.470714 containerd[1735]: time="2025-09-05T23:52:57.470279729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6lbxn,Uid:b5044eea-c1a2-425c-8143-865642d56b2b,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:52:57.532911 containerd[1735]: time="2025-09-05T23:52:57.532671922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:52:57.532911 containerd[1735]: time="2025-09-05T23:52:57.532733402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:52:57.532911 containerd[1735]: time="2025-09-05T23:52:57.532767002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:57.533503 containerd[1735]: time="2025-09-05T23:52:57.532854162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:52:57.552086 systemd[1]: Started cri-containerd-1da85c2b9d32777aadd157d45a9c71f3c81ee2043c7d3725e53f86c00c562b4b.scope - libcontainer container 1da85c2b9d32777aadd157d45a9c71f3c81ee2043c7d3725e53f86c00c562b4b. 
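Each CreateContainer message above ties a 64-hex container id back to the sandbox it runs in; a short sketch that extracts those pairings (the sample is the kube-proxy line from this log, with the JSON-style quote escapes dropped):

import re

CREATE = re.compile(
    r'CreateContainer within sandbox "([0-9a-f]{64})"'
    r'.*?Name:([\w-]+)'
    r'.*?returns container id "([0-9a-f]{64})"'
)

line = ('CreateContainer within sandbox "114ccc46af73f706b1242bf2ccfe4c03'
        'eb9b51e9fb03388251189887c3a8fde1" for &ContainerMetadata'
        '{Name:kube-proxy,Attempt:0,} returns container id '
        '"94e93b94480e17cacc760b63ff3630e937801885270b04d0bfd3b9791203809b"')
sandbox, name, container = CREATE.search(line).groups()
print(f"{name}: container {container[:12]}... in sandbox {sandbox[:12]}...")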
Sep 5 23:52:57.581699 containerd[1735]: time="2025-09-05T23:52:57.581657116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6lbxn,Uid:b5044eea-c1a2-425c-8143-865642d56b2b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1da85c2b9d32777aadd157d45a9c71f3c81ee2043c7d3725e53f86c00c562b4b\"" Sep 5 23:52:57.584689 containerd[1735]: time="2025-09-05T23:52:57.583425275Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:53:00.130435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015328401.mount: Deactivated successfully. Sep 5 23:53:00.573198 containerd[1735]: time="2025-09-05T23:53:00.573154707Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:00.577074 containerd[1735]: time="2025-09-05T23:53:00.576979866Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 23:53:00.582462 containerd[1735]: time="2025-09-05T23:53:00.582409506Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:00.588624 containerd[1735]: time="2025-09-05T23:53:00.587880265Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:00.588624 containerd[1735]: time="2025-09-05T23:53:00.588518105Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.00370499s" Sep 5 23:53:00.588624 containerd[1735]: time="2025-09-05T23:53:00.588545705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:53:00.597439 containerd[1735]: time="2025-09-05T23:53:00.597399824Z" level=info msg="CreateContainer within sandbox \"1da85c2b9d32777aadd157d45a9c71f3c81ee2043c7d3725e53f86c00c562b4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:53:00.643794 containerd[1735]: time="2025-09-05T23:53:00.643743338Z" level=info msg="CreateContainer within sandbox \"1da85c2b9d32777aadd157d45a9c71f3c81ee2043c7d3725e53f86c00c562b4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1ec8f11a62cb9c5423cfa6b69ce1273867389bd925b2b8c8619f3a2f38db2f3f\"" Sep 5 23:53:00.644899 containerd[1735]: time="2025-09-05T23:53:00.644425498Z" level=info msg="StartContainer for \"1ec8f11a62cb9c5423cfa6b69ce1273867389bd925b2b8c8619f3a2f38db2f3f\"" Sep 5 23:53:00.673021 systemd[1]: Started cri-containerd-1ec8f11a62cb9c5423cfa6b69ce1273867389bd925b2b8c8619f3a2f38db2f3f.scope - libcontainer container 1ec8f11a62cb9c5423cfa6b69ce1273867389bd925b2b8c8619f3a2f38db2f3f. 
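Two figures in the pull records above are enough for a rough throughput estimate for quay.io/tigera/operator:v1.38.6: "bytes read=22152365" and a pull time of 3.00370499s.

bytes_read = 22_152_365      # "active requests=0, bytes read=22152365"
pull_seconds = 3.00370499    # "... in 3.00370499s"
print(f"{bytes_read / pull_seconds / 2**20:.2f} MiB/s")   # 7.03 MiB/s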
Sep 5 23:53:00.701163 containerd[1735]: time="2025-09-05T23:53:00.701106772Z" level=info msg="StartContainer for \"1ec8f11a62cb9c5423cfa6b69ce1273867389bd925b2b8c8619f3a2f38db2f3f\" returns successfully" Sep 5 23:53:01.634388 kubelet[3230]: I0905 23:53:01.634049 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4dx7d" podStartSLOduration=5.634032142 podStartE2EDuration="5.634032142s" podCreationTimestamp="2025-09-05 23:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:52:57.62816323 +0000 UTC m=+6.174998466" watchObservedRunningTime="2025-09-05 23:53:01.634032142 +0000 UTC m=+10.180867378" Sep 5 23:53:02.208500 kubelet[3230]: I0905 23:53:02.208081 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6lbxn" podStartSLOduration=2.201597965 podStartE2EDuration="5.208065274s" podCreationTimestamp="2025-09-05 23:52:57 +0000 UTC" firstStartedPulling="2025-09-05 23:52:57.583034036 +0000 UTC m=+6.129869272" lastFinishedPulling="2025-09-05 23:53:00.589501345 +0000 UTC m=+9.136336581" observedRunningTime="2025-09-05 23:53:01.634273342 +0000 UTC m=+10.181108538" watchObservedRunningTime="2025-09-05 23:53:02.208065274 +0000 UTC m=+10.754900510" Sep 5 23:53:06.856527 sudo[2240]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:06.981775 sshd[2237]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:06.984832 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:60272.service: Deactivated successfully. Sep 5 23:53:06.989114 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:53:06.990924 systemd[1]: session-9.scope: Consumed 7.771s CPU time, 152.2M memory peak, 0B memory swap peak. Sep 5 23:53:06.994009 systemd-logind[1701]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:53:06.996276 systemd-logind[1701]: Removed session 9. Sep 5 23:53:14.336937 systemd[1]: Created slice kubepods-besteffort-pod2b9ed2be_a56f_4918_add4_5ed8def47572.slice - libcontainer container kubepods-besteffort-pod2b9ed2be_a56f_4918_add4_5ed8def47572.slice. 
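The tigera-operator startup entry above is internally consistent: podStartSLOduration (2.201597965) is the end-to-end duration (5.208065274s) minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling. A quick check, with the timestamps trimmed to microseconds since strptime accepts at most six fractional digits:

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
first = datetime.strptime("2025-09-05 23:52:57.583034", fmt)  # firstStartedPulling
last = datetime.strptime("2025-09-05 23:53:00.589501", fmt)   # lastFinishedPulling
pull = (last - first).total_seconds()     # 3.006467 s
print(5.208065274 - pull)                 # ~2.2016, the reported SLO duration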
Sep 5 23:53:14.383946 kubelet[3230]: I0905 23:53:14.383666 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b9ed2be-a56f-4918-add4-5ed8def47572-tigera-ca-bundle\") pod \"calico-typha-84987899fc-4kgrd\" (UID: \"2b9ed2be-a56f-4918-add4-5ed8def47572\") " pod="calico-system/calico-typha-84987899fc-4kgrd" Sep 5 23:53:14.384849 kubelet[3230]: I0905 23:53:14.384367 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2b9ed2be-a56f-4918-add4-5ed8def47572-typha-certs\") pod \"calico-typha-84987899fc-4kgrd\" (UID: \"2b9ed2be-a56f-4918-add4-5ed8def47572\") " pod="calico-system/calico-typha-84987899fc-4kgrd" Sep 5 23:53:14.384849 kubelet[3230]: I0905 23:53:14.384398 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh2ft\" (UniqueName: \"kubernetes.io/projected/2b9ed2be-a56f-4918-add4-5ed8def47572-kube-api-access-wh2ft\") pod \"calico-typha-84987899fc-4kgrd\" (UID: \"2b9ed2be-a56f-4918-add4-5ed8def47572\") " pod="calico-system/calico-typha-84987899fc-4kgrd" Sep 5 23:53:14.610936 systemd[1]: Created slice kubepods-besteffort-pod35a449de_da51_4227_9be4_80cd6cd15507.slice - libcontainer container kubepods-besteffort-pod35a449de_da51_4227_9be4_80cd6cd15507.slice. Sep 5 23:53:14.648012 containerd[1735]: time="2025-09-05T23:53:14.647413980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84987899fc-4kgrd,Uid:2b9ed2be-a56f-4918-add4-5ed8def47572,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:14.687335 kubelet[3230]: I0905 23:53:14.686539 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/35a449de-da51-4227-9be4-80cd6cd15507-node-certs\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687335 kubelet[3230]: I0905 23:53:14.686582 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-var-lib-calico\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687335 kubelet[3230]: I0905 23:53:14.686598 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sddqb\" (UniqueName: \"kubernetes.io/projected/35a449de-da51-4227-9be4-80cd6cd15507-kube-api-access-sddqb\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687335 kubelet[3230]: I0905 23:53:14.686621 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-cni-bin-dir\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687335 kubelet[3230]: I0905 23:53:14.686636 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-cni-log-dir\") pod \"calico-node-2t7z7\" (UID: 
\"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687553 kubelet[3230]: I0905 23:53:14.686650 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-lib-modules\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687553 kubelet[3230]: I0905 23:53:14.686668 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-xtables-lock\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687553 kubelet[3230]: I0905 23:53:14.686683 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-flexvol-driver-host\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687553 kubelet[3230]: I0905 23:53:14.686699 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-cni-net-dir\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687553 kubelet[3230]: I0905 23:53:14.686715 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a449de-da51-4227-9be4-80cd6cd15507-tigera-ca-bundle\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687665 kubelet[3230]: I0905 23:53:14.686731 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-var-run-calico\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.687665 kubelet[3230]: I0905 23:53:14.686749 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/35a449de-da51-4227-9be4-80cd6cd15507-policysync\") pod \"calico-node-2t7z7\" (UID: \"35a449de-da51-4227-9be4-80cd6cd15507\") " pod="calico-system/calico-node-2t7z7" Sep 5 23:53:14.698719 containerd[1735]: time="2025-09-05T23:53:14.698031936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:14.698719 containerd[1735]: time="2025-09-05T23:53:14.698095736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:14.698719 containerd[1735]: time="2025-09-05T23:53:14.698110696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:14.698719 containerd[1735]: time="2025-09-05T23:53:14.698183056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:14.728469 systemd[1]: Started cri-containerd-6429014aa45f031b8b0fae32293410359750c774d1daabc22ba8fae04f5383fa.scope - libcontainer container 6429014aa45f031b8b0fae32293410359750c774d1daabc22ba8fae04f5383fa. Sep 5 23:53:14.756413 kubelet[3230]: E0905 23:53:14.755591 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:14.798081 kubelet[3230]: I0905 23:53:14.797818 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b655b97a-e9ef-4351-a639-e1502a0f30b8-kubelet-dir\") pod \"csi-node-driver-gmdjj\" (UID: \"b655b97a-e9ef-4351-a639-e1502a0f30b8\") " pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:14.798081 kubelet[3230]: I0905 23:53:14.797881 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b655b97a-e9ef-4351-a639-e1502a0f30b8-varrun\") pod \"csi-node-driver-gmdjj\" (UID: \"b655b97a-e9ef-4351-a639-e1502a0f30b8\") " pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:14.798081 kubelet[3230]: I0905 23:53:14.797958 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b655b97a-e9ef-4351-a639-e1502a0f30b8-socket-dir\") pod \"csi-node-driver-gmdjj\" (UID: \"b655b97a-e9ef-4351-a639-e1502a0f30b8\") " pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:14.799029 kubelet[3230]: I0905 23:53:14.798051 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qs9\" (UniqueName: \"kubernetes.io/projected/b655b97a-e9ef-4351-a639-e1502a0f30b8-kube-api-access-29qs9\") pod \"csi-node-driver-gmdjj\" (UID: \"b655b97a-e9ef-4351-a639-e1502a0f30b8\") " pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:14.799148 kubelet[3230]: I0905 23:53:14.799132 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b655b97a-e9ef-4351-a639-e1502a0f30b8-registration-dir\") pod \"csi-node-driver-gmdjj\" (UID: \"b655b97a-e9ef-4351-a639-e1502a0f30b8\") " pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:14.803382 kubelet[3230]: E0905 23:53:14.803359 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.803942 kubelet[3230]: W0905 23:53:14.803920 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.804056 kubelet[3230]: E0905 23:53:14.804041 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:14.804297 kubelet[3230]: E0905 23:53:14.804285 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.804372 kubelet[3230]: W0905 23:53:14.804360 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.804430 kubelet[3230]: E0905 23:53:14.804419 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.807021 kubelet[3230]: E0905 23:53:14.806797 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.807021 kubelet[3230]: W0905 23:53:14.806813 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.807021 kubelet[3230]: E0905 23:53:14.806826 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.807811 kubelet[3230]: E0905 23:53:14.807557 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.807811 kubelet[3230]: W0905 23:53:14.807575 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.807811 kubelet[3230]: E0905 23:53:14.807588 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.810347 kubelet[3230]: E0905 23:53:14.810196 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.810347 kubelet[3230]: W0905 23:53:14.810212 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.810347 kubelet[3230]: E0905 23:53:14.810224 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.813463 kubelet[3230]: E0905 23:53:14.813410 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.813463 kubelet[3230]: W0905 23:53:14.813424 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.813463 kubelet[3230]: E0905 23:53:14.813437 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:53:14.846923 containerd[1735]: time="2025-09-05T23:53:14.845926962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84987899fc-4kgrd,Uid:2b9ed2be-a56f-4918-add4-5ed8def47572,Namespace:calico-system,Attempt:0,} returns sandbox id \"6429014aa45f031b8b0fae32293410359750c774d1daabc22ba8fae04f5383fa\"" Sep 5 23:53:14.849825 containerd[1735]: time="2025-09-05T23:53:14.849776842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:53:14.901945 kubelet[3230]: E0905 23:53:14.900350 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.901945 kubelet[3230]: W0905 23:53:14.900374 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.901945 kubelet[3230]: E0905 23:53:14.900394 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.903148 kubelet[3230]: E0905 23:53:14.902886 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.903148 kubelet[3230]: W0905 23:53:14.902905 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.903148 kubelet[3230]: E0905 23:53:14.902919 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.906020 kubelet[3230]: E0905 23:53:14.904649 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.906020 kubelet[3230]: W0905 23:53:14.904667 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.906020 kubelet[3230]: E0905 23:53:14.904683 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:14.906445 kubelet[3230]: E0905 23:53:14.906408 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:14.906445 kubelet[3230]: W0905 23:53:14.906421 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:14.906445 kubelet[3230]: E0905 23:53:14.906434 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 5 23:53:14.918364 containerd[1735]: time="2025-09-05T23:53:14.917228556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2t7z7,Uid:35a449de-da51-4227-9be4-80cd6cd15507,Namespace:calico-system,Attempt:0,}"
Sep 5 23:53:14.975169 containerd[1735]: time="2025-09-05T23:53:14.974514191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:14.975169 containerd[1735]: time="2025-09-05T23:53:14.974594591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:14.975169 containerd[1735]: time="2025-09-05T23:53:14.974608471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:14.975169 containerd[1735]: time="2025-09-05T23:53:14.974706431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:15.001272 systemd[1]: Started cri-containerd-1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c.scope - libcontainer container 1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c. Sep 5 23:53:15.030417 containerd[1735]: time="2025-09-05T23:53:15.030374666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2t7z7,Uid:35a449de-da51-4227-9be4-80cd6cd15507,Namespace:calico-system,Attempt:0,} returns sandbox id \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\"" Sep 5 23:53:16.081170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928508807.mount: Deactivated successfully. Sep 5 23:53:16.553652 kubelet[3230]: E0905 23:53:16.553499 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:16.752503 containerd[1735]: time="2025-09-05T23:53:16.752462398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:16.755155 containerd[1735]: time="2025-09-05T23:53:16.755116998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 23:53:16.759335 containerd[1735]: time="2025-09-05T23:53:16.759289158Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:16.765192 containerd[1735]: time="2025-09-05T23:53:16.765128998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:16.765887 containerd[1735]: time="2025-09-05T23:53:16.765760478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.915942556s" Sep 5 23:53:16.765887 containerd[1735]: time="2025-09-05T23:53:16.765793518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 23:53:16.767475 containerd[1735]: time="2025-09-05T23:53:16.767294598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 23:53:16.788406 containerd[1735]: time="2025-09-05T23:53:16.788357397Z" level=info msg="CreateContainer within sandbox \"6429014aa45f031b8b0fae32293410359750c774d1daabc22ba8fae04f5383fa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 23:53:16.849075 containerd[1735]: time="2025-09-05T23:53:16.848922996Z" level=info msg="CreateContainer within sandbox \"6429014aa45f031b8b0fae32293410359750c774d1daabc22ba8fae04f5383fa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a3ab2b73a90ead47309bcd484db0e574b39397d3cd6a901372d6121a64b58a11\"" Sep 5 23:53:16.850907 containerd[1735]: 
time="2025-09-05T23:53:16.850353956Z" level=info msg="StartContainer for \"a3ab2b73a90ead47309bcd484db0e574b39397d3cd6a901372d6121a64b58a11\"" Sep 5 23:53:16.883027 systemd[1]: Started cri-containerd-a3ab2b73a90ead47309bcd484db0e574b39397d3cd6a901372d6121a64b58a11.scope - libcontainer container a3ab2b73a90ead47309bcd484db0e574b39397d3cd6a901372d6121a64b58a11. Sep 5 23:53:16.919638 containerd[1735]: time="2025-09-05T23:53:16.919593834Z" level=info msg="StartContainer for \"a3ab2b73a90ead47309bcd484db0e574b39397d3cd6a901372d6121a64b58a11\" returns successfully" Sep 5 23:53:17.673375 kubelet[3230]: I0905 23:53:17.673314 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84987899fc-4kgrd" podStartSLOduration=1.756233661 podStartE2EDuration="3.673298137s" podCreationTimestamp="2025-09-05 23:53:14 +0000 UTC" firstStartedPulling="2025-09-05 23:53:14.849530482 +0000 UTC m=+23.396365718" lastFinishedPulling="2025-09-05 23:53:16.766594958 +0000 UTC m=+25.313430194" observedRunningTime="2025-09-05 23:53:17.671956897 +0000 UTC m=+26.218792093" watchObservedRunningTime="2025-09-05 23:53:17.673298137 +0000 UTC m=+26.220133373" Sep 5 23:53:17.692949 kubelet[3230]: E0905 23:53:17.692921 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.693238 kubelet[3230]: W0905 23:53:17.693078 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.693238 kubelet[3230]: E0905 23:53:17.693102 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.693694 kubelet[3230]: E0905 23:53:17.693495 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.693694 kubelet[3230]: W0905 23:53:17.693508 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.693694 kubelet[3230]: E0905 23:53:17.693583 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:53:17.694109 kubelet[3230]: E0905 23:53:17.693994 3230 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:53:17.694109 kubelet[3230]: W0905 23:53:17.694008 3230 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:53:17.694109 kubelet[3230]: E0905 23:53:17.694019 3230 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Sep 5 23:53:18.213791 containerd[1735]: time="2025-09-05T23:53:18.213112285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:18.217351 containerd[1735]: time="2025-09-05T23:53:18.217317725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 23:53:18.222055 containerd[1735]: time="2025-09-05T23:53:18.222002285Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:18.228008 containerd[1735]: time="2025-09-05T23:53:18.227918405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:18.230145 containerd[1735]: time="2025-09-05T23:53:18.230007805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.462524847s" Sep 5 23:53:18.230145 containerd[1735]: time="2025-09-05T23:53:18.230051045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:53:18.237660 containerd[1735]: time="2025-09-05T23:53:18.237454085Z" level=info msg="CreateContainer within sandbox \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:53:18.291328 containerd[1735]: time="2025-09-05T23:53:18.291251363Z" level=info msg="CreateContainer within sandbox \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1\"" Sep 5 23:53:18.292214 containerd[1735]: time="2025-09-05T23:53:18.292174403Z" level=info msg="StartContainer for \"d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1\"" Sep 5 23:53:18.331010 systemd[1]: Started cri-containerd-d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1.scope - libcontainer container d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1. Sep 5 23:53:18.363964 containerd[1735]: time="2025-09-05T23:53:18.363914442Z" level=info msg="StartContainer for \"d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1\" returns successfully" Sep 5 23:53:18.375981 systemd[1]: cri-containerd-d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1.scope: Deactivated successfully. 
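The PullImage → ImageCreate → "Pulled image ... in 1.462524847s" sequence above is containerd's CRI plugin driving an ordinary client-side pull. A rough equivalent with containerd's Go client, assuming the default socket path and the "k8s.io" namespace that CRI-managed images live in:

    // Sketch: the PullImage / ImageCreate / "Pulled image" sequence via
    // containerd's Go client. Socket path and namespace are the defaults a
    // kubelet-managed containerd uses; the image ref is taken from the log.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Pulled %s in %s\n", img.Name(), time.Since(start))
    }

Note the scope is deactivated moments after StartContainer returns: flexvol-driver is an init container that installs the uds binary and exits, which is what finally quiets the FlexVolume probe errors.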
Sep 5 23:53:18.553355 kubelet[3230]: E0905 23:53:18.553306 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:18.663661 kubelet[3230]: I0905 23:53:18.663627 3230 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:18.771560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1-rootfs.mount: Deactivated successfully. Sep 5 23:53:19.373589 containerd[1735]: time="2025-09-05T23:53:19.373394139Z" level=info msg="shim disconnected" id=d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1 namespace=k8s.io Sep 5 23:53:19.373589 containerd[1735]: time="2025-09-05T23:53:19.373445779Z" level=warning msg="cleaning up after shim disconnected" id=d1398218934819fae60b2f4ff76578a2097518ac1840002a84711179902893b1 namespace=k8s.io Sep 5 23:53:19.373589 containerd[1735]: time="2025-09-05T23:53:19.373453819Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:53:19.668822 containerd[1735]: time="2025-09-05T23:53:19.668055852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:53:20.263702 kubelet[3230]: I0905 23:53:20.263345 3230 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:20.553682 kubelet[3230]: E0905 23:53:20.553636 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:22.078084 containerd[1735]: time="2025-09-05T23:53:22.078025038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:22.081000 containerd[1735]: time="2025-09-05T23:53:22.080808078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:53:22.085688 containerd[1735]: time="2025-09-05T23:53:22.085412598Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:22.090023 containerd[1735]: time="2025-09-05T23:53:22.089978358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:22.090898 containerd[1735]: time="2025-09-05T23:53:22.090853278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.422760146s" Sep 5 23:53:22.090979 containerd[1735]: time="2025-09-05T23:53:22.090899518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 
23:53:22.101351 containerd[1735]: time="2025-09-05T23:53:22.101314358Z" level=info msg="CreateContainer within sandbox \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:53:22.156414 containerd[1735]: time="2025-09-05T23:53:22.156356357Z" level=info msg="CreateContainer within sandbox \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c\"" Sep 5 23:53:22.157145 containerd[1735]: time="2025-09-05T23:53:22.156981837Z" level=info msg="StartContainer for \"37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c\"" Sep 5 23:53:22.193087 systemd[1]: Started cri-containerd-37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c.scope - libcontainer container 37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c. Sep 5 23:53:22.229784 containerd[1735]: time="2025-09-05T23:53:22.229635195Z" level=info msg="StartContainer for \"37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c\" returns successfully" Sep 5 23:53:22.553908 kubelet[3230]: E0905 23:53:22.553590 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:23.454585 containerd[1735]: time="2025-09-05T23:53:23.454540047Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:53:23.457404 systemd[1]: cri-containerd-37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c.scope: Deactivated successfully. Sep 5 23:53:23.464364 kubelet[3230]: I0905 23:53:23.460602 3230 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 23:53:23.488025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c-rootfs.mount: Deactivated successfully. Sep 5 23:53:24.317066 systemd[1]: Created slice kubepods-burstable-pod5666caf1_ebe2_4df7_a26f_9fc0c5f462aa.slice - libcontainer container kubepods-burstable-pod5666caf1_ebe2_4df7_a26f_9fc0c5f462aa.slice. Sep 5 23:53:24.324843 systemd[1]: Created slice kubepods-besteffort-podb655b97a_e9ef_4351_a639_e1502a0f30b8.slice - libcontainer container kubepods-besteffort-podb655b97a_e9ef_4351_a639_e1502a0f30b8.slice. 
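The reload error at 23:53:23.454 above is containerd's CRI plugin reacting to a fs event in /etc/cni/net.d: install-cni wrote calico-kubeconfig first, and the actual network config (typically 10-calico.conflist; the name is an assumption here) was not there yet, so the reload found nothing loadable. A sketch of the same lookup with the CNI project's libcni:

    // Sketch: the lookup behind "no network config found in /etc/cni/net.d".
    // The WRITE event for calico-kubeconfig triggers a reload, but until a
    // *.conflist also lands in the directory there is nothing to load.
    package main

    import (
        "fmt"
        "log"

        "github.com/containernetworking/cni/libcni"
    )

    func main() {
        dir := "/etc/cni/net.d"
        files, err := libcni.ConfFiles(dir, []string{".conf", ".conflist", ".json"})
        if err != nil {
            log.Fatal(err)
        }
        if len(files) == 0 {
            fmt.Println("no network config found in", dir) // the state in the log
            return
        }
        for _, f := range files {
            fmt.Println("loadable CNI config:", f)
        }
    }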
Sep 5 23:53:24.328716 containerd[1735]: time="2025-09-05T23:53:24.328679588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmdjj,Uid:b655b97a-e9ef-4351-a639-e1502a0f30b8,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:24.329511 containerd[1735]: time="2025-09-05T23:53:24.329340188Z" level=info msg="shim disconnected" id=37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c namespace=k8s.io Sep 5 23:53:24.329511 containerd[1735]: time="2025-09-05T23:53:24.329380228Z" level=warning msg="cleaning up after shim disconnected" id=37bb559c325703902c46f7c1d28b41851dbde7c57da172307c86f9fea38d563c namespace=k8s.io Sep 5 23:53:24.329511 containerd[1735]: time="2025-09-05T23:53:24.329387828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:53:24.340641 systemd[1]: Created slice kubepods-burstable-pode0f93c22_927f_47b6_8cee_8ea27a2ee078.slice - libcontainer container kubepods-burstable-pode0f93c22_927f_47b6_8cee_8ea27a2ee078.slice. Sep 5 23:53:24.361755 systemd[1]: Created slice kubepods-besteffort-pod67194c50_357b_4abf_9127_333241d1e011.slice - libcontainer container kubepods-besteffort-pod67194c50_357b_4abf_9127_333241d1e011.slice. Sep 5 23:53:24.374008 systemd[1]: Created slice kubepods-besteffort-pod7b09cec7_cf29_4181_9ddb_9c4e5f51fab5.slice - libcontainer container kubepods-besteffort-pod7b09cec7_cf29_4181_9ddb_9c4e5f51fab5.slice. Sep 5 23:53:24.390804 kubelet[3230]: I0905 23:53:24.387443 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4l64\" (UniqueName: \"kubernetes.io/projected/e0f93c22-927f-47b6-8cee-8ea27a2ee078-kube-api-access-z4l64\") pod \"coredns-674b8bbfcf-8cqwz\" (UID: \"e0f93c22-927f-47b6-8cee-8ea27a2ee078\") " pod="kube-system/coredns-674b8bbfcf-8cqwz" Sep 5 23:53:24.395491 kubelet[3230]: I0905 23:53:24.391363 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5666caf1-ebe2-4df7-a26f-9fc0c5f462aa-config-volume\") pod \"coredns-674b8bbfcf-hdz5l\" (UID: \"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa\") " pod="kube-system/coredns-674b8bbfcf-hdz5l" Sep 5 23:53:24.395491 kubelet[3230]: I0905 23:53:24.391400 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f93c22-927f-47b6-8cee-8ea27a2ee078-config-volume\") pod \"coredns-674b8bbfcf-8cqwz\" (UID: \"e0f93c22-927f-47b6-8cee-8ea27a2ee078\") " pod="kube-system/coredns-674b8bbfcf-8cqwz" Sep 5 23:53:24.395491 kubelet[3230]: I0905 23:53:24.391427 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67194c50-357b-4abf-9127-333241d1e011-tigera-ca-bundle\") pod \"calico-kube-controllers-78c94d557f-2p295\" (UID: \"67194c50-357b-4abf-9127-333241d1e011\") " pod="calico-system/calico-kube-controllers-78c94d557f-2p295" Sep 5 23:53:24.395491 kubelet[3230]: I0905 23:53:24.391491 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9cjv\" (UniqueName: \"kubernetes.io/projected/67194c50-357b-4abf-9127-333241d1e011-kube-api-access-c9cjv\") pod \"calico-kube-controllers-78c94d557f-2p295\" (UID: \"67194c50-357b-4abf-9127-333241d1e011\") " pod="calico-system/calico-kube-controllers-78c94d557f-2p295" Sep 5 23:53:24.395491 kubelet[3230]: I0905 23:53:24.391511 3230 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b9118f6-416f-46e4-abe5-5d379ea246b1-calico-apiserver-certs\") pod \"calico-apiserver-7c5854478d-ms7sv\" (UID: \"9b9118f6-416f-46e4-abe5-5d379ea246b1\") " pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" Sep 5 23:53:24.395711 kubelet[3230]: I0905 23:53:24.391548 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zh5\" (UniqueName: \"kubernetes.io/projected/5666caf1-ebe2-4df7-a26f-9fc0c5f462aa-kube-api-access-g7zh5\") pod \"coredns-674b8bbfcf-hdz5l\" (UID: \"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa\") " pod="kube-system/coredns-674b8bbfcf-hdz5l" Sep 5 23:53:24.395711 kubelet[3230]: I0905 23:53:24.391568 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqg8\" (UniqueName: \"kubernetes.io/projected/9b9118f6-416f-46e4-abe5-5d379ea246b1-kube-api-access-nqqg8\") pod \"calico-apiserver-7c5854478d-ms7sv\" (UID: \"9b9118f6-416f-46e4-abe5-5d379ea246b1\") " pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" Sep 5 23:53:24.399475 systemd[1]: Created slice kubepods-besteffort-podf5f807d1_59cd_4e6a_a224_a7b18405beaf.slice - libcontainer container kubepods-besteffort-podf5f807d1_59cd_4e6a_a224_a7b18405beaf.slice. Sep 5 23:53:24.414899 systemd[1]: Created slice kubepods-besteffort-pod6c6dfb93_02c5_4946_b32d_4225aadf4328.slice - libcontainer container kubepods-besteffort-pod6c6dfb93_02c5_4946_b32d_4225aadf4328.slice. Sep 5 23:53:24.427715 systemd[1]: Created slice kubepods-besteffort-pod9b9118f6_416f_46e4_abe5_5d379ea246b1.slice - libcontainer container kubepods-besteffort-pod9b9118f6_416f_46e4_abe5_5d379ea246b1.slice. Sep 5 23:53:24.468911 containerd[1735]: time="2025-09-05T23:53:24.468794945Z" level=error msg="Failed to destroy network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.473913 containerd[1735]: time="2025-09-05T23:53:24.472153824Z" level=error msg="encountered an error cleaning up failed sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.473913 containerd[1735]: time="2025-09-05T23:53:24.472221464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmdjj,Uid:b655b97a-e9ef-4351-a639-e1502a0f30b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.473342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456-shm.mount: Deactivated successfully. 
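From here on, every sandbox add and delete fails on the same stat. The Calico CNI plugin learns its node identity from /var/lib/calico/nodename, a file the calico/node container writes once it is up, so the log's own hint ("check that the calico/node container is running") is literal: the node image is still being pulled below, so nothing has written the file yet. A minimal sketch of the gate:

    // Sketch: the gate behind "stat /var/lib/calico/nodename: no such file
    // or directory". The Calico CNI plugin reads its node identity from this
    // file, written by the calico/node container, and fails every sandbox
    // ADD/DEL until it exists.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            // The state captured in the log: calico-node is still coming up.
            fmt.Println("check that the calico/node container is running:", err)
            os.Exit(1)
        }
        fmt.Println("node name:", strings.TrimSpace(string(b)))
    }

These failures are self-resolving: once the calico/node container reaches Running and writes the file, sandbox creation for the pending pods proceeds.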
Sep 5 23:53:24.474225 kubelet[3230]: E0905 23:53:24.472456 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.474225 kubelet[3230]: E0905 23:53:24.472520 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:24.474225 kubelet[3230]: E0905 23:53:24.472537 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gmdjj" Sep 5 23:53:24.474318 kubelet[3230]: E0905 23:53:24.472582 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gmdjj_calico-system(b655b97a-e9ef-4351-a639-e1502a0f30b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gmdjj_calico-system(b655b97a-e9ef-4351-a639-e1502a0f30b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:24.492941 kubelet[3230]: I0905 23:53:24.492520 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-ca-bundle\") pod \"whisker-6894bc6856-ngkgx\" (UID: \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\") " pod="calico-system/whisker-6894bc6856-ngkgx" Sep 5 23:53:24.492941 kubelet[3230]: I0905 23:53:24.492586 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttfn\" (UniqueName: \"kubernetes.io/projected/6c6dfb93-02c5-4946-b32d-4225aadf4328-kube-api-access-dttfn\") pod \"goldmane-54d579b49d-jffjd\" (UID: \"6c6dfb93-02c5-4946-b32d-4225aadf4328\") " pod="calico-system/goldmane-54d579b49d-jffjd" Sep 5 23:53:24.493325 kubelet[3230]: I0905 23:53:24.493150 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-backend-key-pair\") pod \"whisker-6894bc6856-ngkgx\" (UID: \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\") " pod="calico-system/whisker-6894bc6856-ngkgx" Sep 5 23:53:24.493451 kubelet[3230]: I0905 23:53:24.493397 3230 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c6dfb93-02c5-4946-b32d-4225aadf4328-goldmane-key-pair\") pod \"goldmane-54d579b49d-jffjd\" (UID: \"6c6dfb93-02c5-4946-b32d-4225aadf4328\") " pod="calico-system/goldmane-54d579b49d-jffjd" Sep 5 23:53:24.493869 kubelet[3230]: I0905 23:53:24.493705 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6dfb93-02c5-4946-b32d-4225aadf4328-config\") pod \"goldmane-54d579b49d-jffjd\" (UID: \"6c6dfb93-02c5-4946-b32d-4225aadf4328\") " pod="calico-system/goldmane-54d579b49d-jffjd" Sep 5 23:53:24.494043 kubelet[3230]: I0905 23:53:24.493968 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b09cec7-cf29-4181-9ddb-9c4e5f51fab5-calico-apiserver-certs\") pod \"calico-apiserver-7c5854478d-5w9t2\" (UID: \"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5\") " pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" Sep 5 23:53:24.494043 kubelet[3230]: I0905 23:53:24.493993 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtcp\" (UniqueName: \"kubernetes.io/projected/f5f807d1-59cd-4e6a-a224-a7b18405beaf-kube-api-access-gjtcp\") pod \"whisker-6894bc6856-ngkgx\" (UID: \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\") " pod="calico-system/whisker-6894bc6856-ngkgx" Sep 5 23:53:24.494903 kubelet[3230]: I0905 23:53:24.494336 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf25b\" (UniqueName: \"kubernetes.io/projected/7b09cec7-cf29-4181-9ddb-9c4e5f51fab5-kube-api-access-gf25b\") pod \"calico-apiserver-7c5854478d-5w9t2\" (UID: \"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5\") " pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" Sep 5 23:53:24.494903 kubelet[3230]: I0905 23:53:24.494364 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c6dfb93-02c5-4946-b32d-4225aadf4328-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-jffjd\" (UID: \"6c6dfb93-02c5-4946-b32d-4225aadf4328\") " pod="calico-system/goldmane-54d579b49d-jffjd" Sep 5 23:53:24.625658 containerd[1735]: time="2025-09-05T23:53:24.625262821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdz5l,Uid:5666caf1-ebe2-4df7-a26f-9fc0c5f462aa,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:24.658847 containerd[1735]: time="2025-09-05T23:53:24.658802020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8cqwz,Uid:e0f93c22-927f-47b6-8cee-8ea27a2ee078,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:24.669840 containerd[1735]: time="2025-09-05T23:53:24.669797100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c94d557f-2p295,Uid:67194c50-357b-4abf-9127-333241d1e011,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:24.681420 kubelet[3230]: I0905 23:53:24.680194 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:24.682255 containerd[1735]: time="2025-09-05T23:53:24.682216782Z" level=info msg="StopPodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\"" Sep 5 23:53:24.682644 
containerd[1735]: time="2025-09-05T23:53:24.682400222Z" level=info msg="Ensure that sandbox b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456 in task-service has been cleanup successfully" Sep 5 23:53:24.690157 containerd[1735]: time="2025-09-05T23:53:24.690128906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 23:53:24.691657 containerd[1735]: time="2025-09-05T23:53:24.691340986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-5w9t2,Uid:7b09cec7-cf29-4181-9ddb-9c4e5f51fab5,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:53:24.714536 containerd[1735]: time="2025-09-05T23:53:24.711740077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6894bc6856-ngkgx,Uid:f5f807d1-59cd-4e6a-a224-a7b18405beaf,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:24.723917 containerd[1735]: time="2025-09-05T23:53:24.723882123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jffjd,Uid:6c6dfb93-02c5-4946-b32d-4225aadf4328,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:24.736298 containerd[1735]: time="2025-09-05T23:53:24.736266729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-ms7sv,Uid:9b9118f6-416f-46e4-abe5-5d379ea246b1,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:53:24.744930 containerd[1735]: time="2025-09-05T23:53:24.744844693Z" level=error msg="StopPodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" failed" error="failed to destroy network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.745155 kubelet[3230]: E0905 23:53:24.745106 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:24.745287 kubelet[3230]: E0905 23:53:24.745176 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456"} Sep 5 23:53:24.745323 kubelet[3230]: E0905 23:53:24.745311 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b655b97a-e9ef-4351-a639-e1502a0f30b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:24.745383 kubelet[3230]: E0905 23:53:24.745336 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b655b97a-e9ef-4351-a639-e1502a0f30b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gmdjj" podUID="b655b97a-e9ef-4351-a639-e1502a0f30b8" Sep 5 23:53:24.762958 containerd[1735]: time="2025-09-05T23:53:24.762906942Z" level=error msg="Failed to destroy network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.763281 containerd[1735]: time="2025-09-05T23:53:24.763244222Z" level=error msg="encountered an error cleaning up failed sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.763326 containerd[1735]: time="2025-09-05T23:53:24.763308423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdz5l,Uid:5666caf1-ebe2-4df7-a26f-9fc0c5f462aa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.763937 kubelet[3230]: E0905 23:53:24.763546 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.763937 kubelet[3230]: E0905 23:53:24.763603 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hdz5l" Sep 5 23:53:24.763937 kubelet[3230]: E0905 23:53:24.763623 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hdz5l" Sep 5 23:53:24.765362 kubelet[3230]: E0905 23:53:24.763673 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hdz5l_kube-system(5666caf1-ebe2-4df7-a26f-9fc0c5f462aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hdz5l_kube-system(5666caf1-ebe2-4df7-a26f-9fc0c5f462aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hdz5l" podUID="5666caf1-ebe2-4df7-a26f-9fc0c5f462aa" Sep 5 23:53:24.863467 containerd[1735]: time="2025-09-05T23:53:24.863329673Z" level=error msg="Failed to destroy network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.863954 containerd[1735]: time="2025-09-05T23:53:24.863921673Z" level=error msg="encountered an error cleaning up failed sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.864013 containerd[1735]: time="2025-09-05T23:53:24.863987753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8cqwz,Uid:e0f93c22-927f-47b6-8cee-8ea27a2ee078,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.865805 kubelet[3230]: E0905 23:53:24.864795 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.865805 kubelet[3230]: E0905 23:53:24.864874 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8cqwz" Sep 5 23:53:24.865805 kubelet[3230]: E0905 23:53:24.864897 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8cqwz" Sep 5 23:53:24.865980 kubelet[3230]: E0905 23:53:24.864951 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8cqwz_kube-system(e0f93c22-927f-47b6-8cee-8ea27a2ee078)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8cqwz_kube-system(e0f93c22-927f-47b6-8cee-8ea27a2ee078)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8cqwz" podUID="e0f93c22-927f-47b6-8cee-8ea27a2ee078" Sep 5 23:53:24.916163 containerd[1735]: time="2025-09-05T23:53:24.915129179Z" level=error msg="Failed to destroy network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.918956 containerd[1735]: time="2025-09-05T23:53:24.918907260Z" level=error msg="encountered an error cleaning up failed sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.919070 containerd[1735]: time="2025-09-05T23:53:24.918975940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c94d557f-2p295,Uid:67194c50-357b-4abf-9127-333241d1e011,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.919413 kubelet[3230]: E0905 23:53:24.919212 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.919413 kubelet[3230]: E0905 23:53:24.919266 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c94d557f-2p295" Sep 5 23:53:24.919413 kubelet[3230]: E0905 23:53:24.919290 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c94d557f-2p295" Sep 5 23:53:24.920317 kubelet[3230]: E0905 23:53:24.919724 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78c94d557f-2p295_calico-system(67194c50-357b-4abf-9127-333241d1e011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-78c94d557f-2p295_calico-system(67194c50-357b-4abf-9127-333241d1e011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c94d557f-2p295" podUID="67194c50-357b-4abf-9127-333241d1e011" Sep 5 23:53:24.991423 containerd[1735]: time="2025-09-05T23:53:24.991374737Z" level=error msg="Failed to destroy network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.991707 containerd[1735]: time="2025-09-05T23:53:24.991673337Z" level=error msg="encountered an error cleaning up failed sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.991741 containerd[1735]: time="2025-09-05T23:53:24.991727897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-5w9t2,Uid:7b09cec7-cf29-4181-9ddb-9c4e5f51fab5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.992285 kubelet[3230]: E0905 23:53:24.992075 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:24.992285 kubelet[3230]: E0905 23:53:24.992141 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" Sep 5 23:53:24.992285 kubelet[3230]: E0905 23:53:24.992162 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" Sep 5 23:53:24.992422 kubelet[3230]: E0905 23:53:24.992213 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7c5854478d-5w9t2_calico-apiserver(7b09cec7-cf29-4181-9ddb-9c4e5f51fab5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5854478d-5w9t2_calico-apiserver(7b09cec7-cf29-4181-9ddb-9c4e5f51fab5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" podUID="7b09cec7-cf29-4181-9ddb-9c4e5f51fab5" Sep 5 23:53:25.014485 containerd[1735]: time="2025-09-05T23:53:25.014433788Z" level=error msg="Failed to destroy network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.016704 containerd[1735]: time="2025-09-05T23:53:25.016449869Z" level=error msg="encountered an error cleaning up failed sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.016704 containerd[1735]: time="2025-09-05T23:53:25.016522309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6894bc6856-ngkgx,Uid:f5f807d1-59cd-4e6a-a224-a7b18405beaf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.017021 kubelet[3230]: E0905 23:53:25.016983 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.017753 kubelet[3230]: E0905 23:53:25.017611 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6894bc6856-ngkgx" Sep 5 23:53:25.017753 kubelet[3230]: E0905 23:53:25.017642 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6894bc6856-ngkgx" Sep 5 23:53:25.017753 kubelet[3230]: E0905 23:53:25.017700 3230 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6894bc6856-ngkgx_calico-system(f5f807d1-59cd-4e6a-a224-a7b18405beaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6894bc6856-ngkgx_calico-system(f5f807d1-59cd-4e6a-a224-a7b18405beaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6894bc6856-ngkgx" podUID="f5f807d1-59cd-4e6a-a224-a7b18405beaf" Sep 5 23:53:25.020594 containerd[1735]: time="2025-09-05T23:53:25.020552871Z" level=error msg="Failed to destroy network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.020971 containerd[1735]: time="2025-09-05T23:53:25.020934471Z" level=error msg="encountered an error cleaning up failed sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.021078 containerd[1735]: time="2025-09-05T23:53:25.021058192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-ms7sv,Uid:9b9118f6-416f-46e4-abe5-5d379ea246b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.021464 kubelet[3230]: E0905 23:53:25.021322 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.021464 kubelet[3230]: E0905 23:53:25.021364 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" Sep 5 23:53:25.021464 kubelet[3230]: E0905 23:53:25.021382 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" Sep 5 23:53:25.021615 kubelet[3230]: E0905 23:53:25.021426 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c5854478d-ms7sv_calico-apiserver(9b9118f6-416f-46e4-abe5-5d379ea246b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c5854478d-ms7sv_calico-apiserver(9b9118f6-416f-46e4-abe5-5d379ea246b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" podUID="9b9118f6-416f-46e4-abe5-5d379ea246b1" Sep 5 23:53:25.024725 containerd[1735]: time="2025-09-05T23:53:25.024690833Z" level=error msg="Failed to destroy network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.025502 containerd[1735]: time="2025-09-05T23:53:25.025469714Z" level=error msg="encountered an error cleaning up failed sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.025631 containerd[1735]: time="2025-09-05T23:53:25.025600034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jffjd,Uid:6c6dfb93-02c5-4946-b32d-4225aadf4328,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.025915 kubelet[3230]: E0905 23:53:25.025877 3230 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.025993 kubelet[3230]: E0905 23:53:25.025925 3230 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jffjd" Sep 5 23:53:25.025993 kubelet[3230]: E0905 23:53:25.025953 3230 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jffjd" Sep 5 23:53:25.026121 kubelet[3230]: E0905 23:53:25.026003 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-jffjd_calico-system(6c6dfb93-02c5-4946-b32d-4225aadf4328)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-jffjd_calico-system(6c6dfb93-02c5-4946-b32d-4225aadf4328)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jffjd" podUID="6c6dfb93-02c5-4946-b32d-4225aadf4328" Sep 5 23:53:25.689336 kubelet[3230]: I0905 23:53:25.689277 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:25.691989 containerd[1735]: time="2025-09-05T23:53:25.690572207Z" level=info msg="StopPodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\"" Sep 5 23:53:25.691989 containerd[1735]: time="2025-09-05T23:53:25.691651207Z" level=info msg="Ensure that sandbox 5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff in task-service has been cleanup successfully" Sep 5 23:53:25.692967 kubelet[3230]: I0905 23:53:25.692942 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:25.695982 kubelet[3230]: I0905 23:53:25.695960 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:25.697356 containerd[1735]: time="2025-09-05T23:53:25.697327130Z" level=info msg="StopPodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\"" Sep 5 23:53:25.698073 containerd[1735]: time="2025-09-05T23:53:25.697701210Z" level=info msg="Ensure that sandbox 16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a in task-service has been cleanup successfully" Sep 5 23:53:25.707892 containerd[1735]: time="2025-09-05T23:53:25.707840855Z" level=info msg="StopPodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\"" Sep 5 23:53:25.708651 containerd[1735]: time="2025-09-05T23:53:25.708619976Z" level=info msg="Ensure that sandbox 4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526 in task-service has been cleanup successfully" Sep 5 23:53:25.712255 kubelet[3230]: I0905 23:53:25.711465 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:25.714208 containerd[1735]: time="2025-09-05T23:53:25.714165738Z" level=info msg="StopPodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\"" Sep 5 23:53:25.714731 containerd[1735]: time="2025-09-05T23:53:25.714688699Z" level=info msg="Ensure that sandbox 64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697 in task-service has been cleanup successfully" Sep 5 23:53:25.727668 kubelet[3230]: I0905 23:53:25.727645 3230 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:25.731588 containerd[1735]: time="2025-09-05T23:53:25.731548787Z" level=info msg="StopPodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\"" Sep 5 23:53:25.731935 containerd[1735]: time="2025-09-05T23:53:25.731909787Z" level=info msg="Ensure that sandbox 27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed in task-service has been cleanup successfully" Sep 5 23:53:25.737513 kubelet[3230]: I0905 23:53:25.736922 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:25.738816 containerd[1735]: time="2025-09-05T23:53:25.737631510Z" level=info msg="StopPodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\"" Sep 5 23:53:25.738816 containerd[1735]: time="2025-09-05T23:53:25.738044070Z" level=info msg="Ensure that sandbox bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d in task-service has been cleanup successfully" Sep 5 23:53:25.747064 containerd[1735]: time="2025-09-05T23:53:25.747018155Z" level=error msg="StopPodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" failed" error="failed to destroy network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.747498 kubelet[3230]: E0905 23:53:25.747346 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:25.747498 kubelet[3230]: E0905 23:53:25.747410 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff"} Sep 5 23:53:25.747498 kubelet[3230]: E0905 23:53:25.747440 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.747498 kubelet[3230]: E0905 23:53:25.747470 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" 
podUID="7b09cec7-cf29-4181-9ddb-9c4e5f51fab5" Sep 5 23:53:25.749393 kubelet[3230]: I0905 23:53:25.748763 3230 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:25.749670 containerd[1735]: time="2025-09-05T23:53:25.749643956Z" level=info msg="StopPodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\"" Sep 5 23:53:25.750172 containerd[1735]: time="2025-09-05T23:53:25.750138957Z" level=info msg="Ensure that sandbox 8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594 in task-service has been cleanup successfully" Sep 5 23:53:25.794416 containerd[1735]: time="2025-09-05T23:53:25.794363659Z" level=error msg="StopPodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" failed" error="failed to destroy network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.794830 kubelet[3230]: E0905 23:53:25.794681 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:25.794830 kubelet[3230]: E0905 23:53:25.794731 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a"} Sep 5 23:53:25.794830 kubelet[3230]: E0905 23:53:25.794764 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c6dfb93-02c5-4946-b32d-4225aadf4328\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.794830 kubelet[3230]: E0905 23:53:25.794798 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c6dfb93-02c5-4946-b32d-4225aadf4328\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jffjd" podUID="6c6dfb93-02c5-4946-b32d-4225aadf4328" Sep 5 23:53:25.807882 containerd[1735]: time="2025-09-05T23:53:25.804537504Z" level=error msg="StopPodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" failed" error="failed to destroy network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 5 23:53:25.808003 kubelet[3230]: E0905 23:53:25.806014 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:25.808003 kubelet[3230]: E0905 23:53:25.806062 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526"} Sep 5 23:53:25.808003 kubelet[3230]: E0905 23:53:25.806094 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b9118f6-416f-46e4-abe5-5d379ea246b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.808003 kubelet[3230]: E0905 23:53:25.806118 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b9118f6-416f-46e4-abe5-5d379ea246b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" podUID="9b9118f6-416f-46e4-abe5-5d379ea246b1" Sep 5 23:53:25.809653 containerd[1735]: time="2025-09-05T23:53:25.809615466Z" level=error msg="StopPodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" failed" error="failed to destroy network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.809975 kubelet[3230]: E0905 23:53:25.809947 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:25.810099 kubelet[3230]: E0905 23:53:25.810082 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697"} Sep 5 23:53:25.810199 kubelet[3230]: E0905 23:53:25.810182 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.810345 kubelet[3230]: E0905 23:53:25.810326 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6894bc6856-ngkgx" podUID="f5f807d1-59cd-4e6a-a224-a7b18405beaf" Sep 5 23:53:25.811575 containerd[1735]: time="2025-09-05T23:53:25.811538107Z" level=error msg="StopPodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" failed" error="failed to destroy network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.811996 kubelet[3230]: E0905 23:53:25.811971 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:25.812265 kubelet[3230]: E0905 23:53:25.812246 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594"} Sep 5 23:53:25.812382 kubelet[3230]: E0905 23:53:25.812368 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.812493 kubelet[3230]: E0905 23:53:25.812476 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hdz5l" podUID="5666caf1-ebe2-4df7-a26f-9fc0c5f462aa" Sep 5 23:53:25.827298 containerd[1735]: time="2025-09-05T23:53:25.827225035Z" level=error msg="StopPodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" failed" error="failed to 
destroy network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.827721 kubelet[3230]: E0905 23:53:25.827570 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:25.827721 kubelet[3230]: E0905 23:53:25.827623 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d"} Sep 5 23:53:25.827721 kubelet[3230]: E0905 23:53:25.827652 3230 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0f93c22-927f-47b6-8cee-8ea27a2ee078\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.827721 kubelet[3230]: E0905 23:53:25.827672 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0f93c22-927f-47b6-8cee-8ea27a2ee078\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8cqwz" podUID="e0f93c22-927f-47b6-8cee-8ea27a2ee078" Sep 5 23:53:25.836073 containerd[1735]: time="2025-09-05T23:53:25.836022439Z" level=error msg="StopPodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" failed" error="failed to destroy network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:53:25.836577 kubelet[3230]: E0905 23:53:25.836406 3230 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:25.836577 kubelet[3230]: E0905 23:53:25.836477 3230 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed"} Sep 5 23:53:25.836577 kubelet[3230]: E0905 23:53:25.836522 3230 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67194c50-357b-4abf-9127-333241d1e011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 23:53:25.836577 kubelet[3230]: E0905 23:53:25.836543 3230 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67194c50-357b-4abf-9127-333241d1e011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c94d557f-2p295" podUID="67194c50-357b-4abf-9127-333241d1e011" Sep 5 23:53:28.899530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516635119.mount: Deactivated successfully. Sep 5 23:53:29.765740 containerd[1735]: time="2025-09-05T23:53:29.765690628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:29.795827 containerd[1735]: time="2025-09-05T23:53:29.795642066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 23:53:29.803884 containerd[1735]: time="2025-09-05T23:53:29.802990425Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:30.463060 containerd[1735]: time="2025-09-05T23:53:30.462986344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:30.463802 containerd[1735]: time="2025-09-05T23:53:30.463774224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 5.772195917s" Sep 5 23:53:30.463914 containerd[1735]: time="2025-09-05T23:53:30.463807664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 23:53:30.485646 containerd[1735]: time="2025-09-05T23:53:30.485603863Z" level=info msg="CreateContainer within sandbox \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 23:53:30.559008 containerd[1735]: time="2025-09-05T23:53:30.558883538Z" level=info msg="CreateContainer within sandbox \"1465ffb36e8a66149338633f6445b11a13c3aef32668376ade820ffd8a84716c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90\"" Sep 5 23:53:30.559955 containerd[1735]: 
time="2025-09-05T23:53:30.559836298Z" level=info msg="StartContainer for \"91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90\"" Sep 5 23:53:30.594078 systemd[1]: Started cri-containerd-91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90.scope - libcontainer container 91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90. Sep 5 23:53:30.625318 containerd[1735]: time="2025-09-05T23:53:30.624561654Z" level=info msg="StartContainer for \"91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90\" returns successfully" Sep 5 23:53:30.786021 kubelet[3230]: I0905 23:53:30.785946 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2t7z7" podStartSLOduration=1.353018326 podStartE2EDuration="16.785927724s" podCreationTimestamp="2025-09-05 23:53:14 +0000 UTC" firstStartedPulling="2025-09-05 23:53:15.031875066 +0000 UTC m=+23.578710302" lastFinishedPulling="2025-09-05 23:53:30.464784504 +0000 UTC m=+39.011619700" observedRunningTime="2025-09-05 23:53:30.783505644 +0000 UTC m=+39.330340880" watchObservedRunningTime="2025-09-05 23:53:30.785927724 +0000 UTC m=+39.332763000" Sep 5 23:53:30.893364 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 23:53:30.893503 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 23:53:31.011707 containerd[1735]: time="2025-09-05T23:53:31.011158510Z" level=info msg="StopPodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\"" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.121 [INFO][4380] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.121 [INFO][4380] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" iface="eth0" netns="/var/run/netns/cni-56f6359c-ffdf-1b76-e3a7-14cfb99b869f" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.121 [INFO][4380] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" iface="eth0" netns="/var/run/netns/cni-56f6359c-ffdf-1b76-e3a7-14cfb99b869f" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.122 [INFO][4380] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" iface="eth0" netns="/var/run/netns/cni-56f6359c-ffdf-1b76-e3a7-14cfb99b869f" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.122 [INFO][4380] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.122 [INFO][4380] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.165 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.165 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.165 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.183 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.184 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.185 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:31.191180 containerd[1735]: 2025-09-05 23:53:31.188 [INFO][4380] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:31.192028 containerd[1735]: time="2025-09-05T23:53:31.191652859Z" level=info msg="TearDown network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" successfully" Sep 5 23:53:31.192028 containerd[1735]: time="2025-09-05T23:53:31.191681819Z" level=info msg="StopPodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" returns successfully" Sep 5 23:53:31.343238 kubelet[3230]: I0905 23:53:31.343186 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-ca-bundle\") pod \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\" (UID: \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\") " Sep 5 23:53:31.343238 kubelet[3230]: I0905 23:53:31.343236 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-backend-key-pair\") pod \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\" (UID: \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\") " Sep 5 23:53:31.343410 kubelet[3230]: I0905 23:53:31.343255 3230 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjtcp\" (UniqueName: \"kubernetes.io/projected/f5f807d1-59cd-4e6a-a224-a7b18405beaf-kube-api-access-gjtcp\") pod \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\" (UID: \"f5f807d1-59cd-4e6a-a224-a7b18405beaf\") " Sep 5 23:53:31.343948 kubelet[3230]: I0905 23:53:31.343848 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f5f807d1-59cd-4e6a-a224-a7b18405beaf" (UID: "f5f807d1-59cd-4e6a-a224-a7b18405beaf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 23:53:31.346522 kubelet[3230]: I0905 23:53:31.346424 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f807d1-59cd-4e6a-a224-a7b18405beaf-kube-api-access-gjtcp" (OuterVolumeSpecName: "kube-api-access-gjtcp") pod "f5f807d1-59cd-4e6a-a224-a7b18405beaf" (UID: "f5f807d1-59cd-4e6a-a224-a7b18405beaf"). InnerVolumeSpecName "kube-api-access-gjtcp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 23:53:31.346783 kubelet[3230]: I0905 23:53:31.346761 3230 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f5f807d1-59cd-4e6a-a224-a7b18405beaf" (UID: "f5f807d1-59cd-4e6a-a224-a7b18405beaf"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 23:53:31.444222 kubelet[3230]: I0905 23:53:31.444103 3230 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-ca-bundle\") on node \"ci-4081.3.5-n-29d70f4830\" DevicePath \"\"" Sep 5 23:53:31.444222 kubelet[3230]: I0905 23:53:31.444138 3230 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5f807d1-59cd-4e6a-a224-a7b18405beaf-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-29d70f4830\" DevicePath \"\"" Sep 5 23:53:31.444222 kubelet[3230]: I0905 23:53:31.444152 3230 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gjtcp\" (UniqueName: \"kubernetes.io/projected/f5f807d1-59cd-4e6a-a224-a7b18405beaf-kube-api-access-gjtcp\") on node \"ci-4081.3.5-n-29d70f4830\" DevicePath \"\"" Sep 5 23:53:31.472651 systemd[1]: run-netns-cni\x2d56f6359c\x2dffdf\x2d1b76\x2de3a7\x2d14cfb99b869f.mount: Deactivated successfully. Sep 5 23:53:31.472742 systemd[1]: var-lib-kubelet-pods-f5f807d1\x2d59cd\x2d4e6a\x2da224\x2da7b18405beaf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjtcp.mount: Deactivated successfully. Sep 5 23:53:31.472794 systemd[1]: var-lib-kubelet-pods-f5f807d1\x2d59cd\x2d4e6a\x2da224\x2da7b18405beaf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 23:53:31.566128 systemd[1]: Removed slice kubepods-besteffort-podf5f807d1_59cd_4e6a_a224_a7b18405beaf.slice - libcontainer container kubepods-besteffort-podf5f807d1_59cd_4e6a_a224_a7b18405beaf.slice. Sep 5 23:53:31.869240 systemd[1]: Created slice kubepods-besteffort-pod0b3b9cf0_531d_4993_a0f3_c76d30b61a6a.slice - libcontainer container kubepods-besteffort-pod0b3b9cf0_531d_4993_a0f3_c76d30b61a6a.slice. 
Sep 5 23:53:31.947666 kubelet[3230]: I0905 23:53:31.947618 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b3b9cf0-531d-4993-a0f3-c76d30b61a6a-whisker-backend-key-pair\") pod \"whisker-6cd8fbdc7f-xn6jj\" (UID: \"0b3b9cf0-531d-4993-a0f3-c76d30b61a6a\") " pod="calico-system/whisker-6cd8fbdc7f-xn6jj" Sep 5 23:53:31.947666 kubelet[3230]: I0905 23:53:31.947674 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b3b9cf0-531d-4993-a0f3-c76d30b61a6a-whisker-ca-bundle\") pod \"whisker-6cd8fbdc7f-xn6jj\" (UID: \"0b3b9cf0-531d-4993-a0f3-c76d30b61a6a\") " pod="calico-system/whisker-6cd8fbdc7f-xn6jj" Sep 5 23:53:31.948114 kubelet[3230]: I0905 23:53:31.947695 3230 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxkct\" (UniqueName: \"kubernetes.io/projected/0b3b9cf0-531d-4993-a0f3-c76d30b61a6a-kube-api-access-vxkct\") pod \"whisker-6cd8fbdc7f-xn6jj\" (UID: \"0b3b9cf0-531d-4993-a0f3-c76d30b61a6a\") " pod="calico-system/whisker-6cd8fbdc7f-xn6jj" Sep 5 23:53:32.175530 containerd[1735]: time="2025-09-05T23:53:32.175108198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd8fbdc7f-xn6jj,Uid:0b3b9cf0-531d-4993-a0f3-c76d30b61a6a,Namespace:calico-system,Attempt:0,}" Sep 5 23:53:32.491645 systemd-networkd[1504]: caliec5f10d2c24: Link UP Sep 5 23:53:32.494058 systemd-networkd[1504]: caliec5f10d2c24: Gained carrier Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.241 [INFO][4415] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.256 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0 whisker-6cd8fbdc7f- calico-system 0b3b9cf0-531d-4993-a0f3-c76d30b61a6a 900 0 2025-09-05 23:53:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cd8fbdc7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 whisker-6cd8fbdc7f-xn6jj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliec5f10d2c24 [] [] }} ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.256 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.277 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" HandleID="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.278 [INFO][4427] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" HandleID="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-29d70f4830", "pod":"whisker-6cd8fbdc7f-xn6jj", "timestamp":"2025-09-05 23:53:32.277979711 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.278 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.278 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.278 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.287 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.291 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.294 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.296 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.300 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.300 [INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.301 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47 Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.308 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.314 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.129/26] block=192.168.46.128/26 handle="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.315 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.129/26] handle="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.315 [INFO][4427] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:32.527966 containerd[1735]: 2025-09-05 23:53:32.315 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.129/26] IPv6=[] ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" HandleID="k8s-pod-network.8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.528612 containerd[1735]: 2025-09-05 23:53:32.316 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0", GenerateName:"whisker-6cd8fbdc7f-", Namespace:"calico-system", SelfLink:"", UID:"0b3b9cf0-531d-4993-a0f3-c76d30b61a6a", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cd8fbdc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"whisker-6cd8fbdc7f-xn6jj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliec5f10d2c24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:32.528612 containerd[1735]: 2025-09-05 23:53:32.316 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.129/32] ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.528612 containerd[1735]: 2025-09-05 23:53:32.317 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec5f10d2c24 ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.528612 containerd[1735]: 2025-09-05 23:53:32.494 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.528612 containerd[1735]: 2025-09-05 23:53:32.496 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0", GenerateName:"whisker-6cd8fbdc7f-", Namespace:"calico-system", SelfLink:"", UID:"0b3b9cf0-531d-4993-a0f3-c76d30b61a6a", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cd8fbdc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47", Pod:"whisker-6cd8fbdc7f-xn6jj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.46.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliec5f10d2c24", MAC:"36:40:58:f7:4c:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:32.528612 containerd[1735]: 2025-09-05 23:53:32.519 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47" Namespace="calico-system" Pod="whisker-6cd8fbdc7f-xn6jj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6cd8fbdc7f--xn6jj-eth0" Sep 5 23:53:32.560044 containerd[1735]: time="2025-09-05T23:53:32.559455494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:32.562189 containerd[1735]: time="2025-09-05T23:53:32.562058973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:32.562189 containerd[1735]: time="2025-09-05T23:53:32.562142133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:32.562389 containerd[1735]: time="2025-09-05T23:53:32.562345453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:32.605665 systemd[1]: Started cri-containerd-8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47.scope - libcontainer container 8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47. 
Sep 5 23:53:32.672436 containerd[1735]: time="2025-09-05T23:53:32.672318767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd8fbdc7f-xn6jj,Uid:0b3b9cf0-531d-4993-a0f3-c76d30b61a6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47\"" Sep 5 23:53:32.685019 containerd[1735]: time="2025-09-05T23:53:32.684976966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 23:53:32.791883 kernel: bpftool[4602]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 23:53:33.022294 systemd-networkd[1504]: vxlan.calico: Link UP Sep 5 23:53:33.022302 systemd-networkd[1504]: vxlan.calico: Gained carrier Sep 5 23:53:33.555933 kubelet[3230]: I0905 23:53:33.555722 3230 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f807d1-59cd-4e6a-a224-a7b18405beaf" path="/var/lib/kubelet/pods/f5f807d1-59cd-4e6a-a224-a7b18405beaf/volumes" Sep 5 23:53:34.007065 systemd-networkd[1504]: caliec5f10d2c24: Gained IPv6LL Sep 5 23:53:34.144906 containerd[1735]: time="2025-09-05T23:53:34.144627436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:34.150187 containerd[1735]: time="2025-09-05T23:53:34.150143355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 23:53:34.156240 containerd[1735]: time="2025-09-05T23:53:34.156205153Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:34.168192 containerd[1735]: time="2025-09-05T23:53:34.168079030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:34.169837 containerd[1735]: time="2025-09-05T23:53:34.169787429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.484768263s" Sep 5 23:53:34.169837 containerd[1735]: time="2025-09-05T23:53:34.169834429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 23:53:34.185596 containerd[1735]: time="2025-09-05T23:53:34.185430305Z" level=info msg="CreateContainer within sandbox \"8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 23:53:34.230598 containerd[1735]: time="2025-09-05T23:53:34.230547532Z" level=info msg="CreateContainer within sandbox \"8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3cb82a5cbd3085e22778ee56b3f347fc029944a643aa3f874e760ca74659893f\"" Sep 5 23:53:34.235623 containerd[1735]: time="2025-09-05T23:53:34.234424091Z" level=info msg="StartContainer for \"3cb82a5cbd3085e22778ee56b3f347fc029944a643aa3f874e760ca74659893f\"" Sep 5 23:53:34.266560 systemd[1]: Started 
cri-containerd-3cb82a5cbd3085e22778ee56b3f347fc029944a643aa3f874e760ca74659893f.scope - libcontainer container 3cb82a5cbd3085e22778ee56b3f347fc029944a643aa3f874e760ca74659893f. Sep 5 23:53:34.308854 containerd[1735]: time="2025-09-05T23:53:34.308814590Z" level=info msg="StartContainer for \"3cb82a5cbd3085e22778ee56b3f347fc029944a643aa3f874e760ca74659893f\" returns successfully" Sep 5 23:53:34.314920 containerd[1735]: time="2025-09-05T23:53:34.314841309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:53:34.327007 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Sep 5 23:53:35.557504 containerd[1735]: time="2025-09-05T23:53:35.557458399Z" level=info msg="StopPodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\"" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.606 [INFO][4728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.609 [INFO][4728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" iface="eth0" netns="/var/run/netns/cni-4253cf18-12f1-76b8-3224-b686f1344719" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.609 [INFO][4728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" iface="eth0" netns="/var/run/netns/cni-4253cf18-12f1-76b8-3224-b686f1344719" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.609 [INFO][4728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" iface="eth0" netns="/var/run/netns/cni-4253cf18-12f1-76b8-3224-b686f1344719" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.609 [INFO][4728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.609 [INFO][4728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.637 [INFO][4735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.637 [INFO][4735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.638 [INFO][4735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.647 [WARNING][4735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.647 [INFO][4735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.649 [INFO][4735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:35.652043 containerd[1735]: 2025-09-05 23:53:35.650 [INFO][4728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:35.654453 containerd[1735]: time="2025-09-05T23:53:35.654401052Z" level=info msg="TearDown network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" successfully" Sep 5 23:53:35.654600 containerd[1735]: time="2025-09-05T23:53:35.654525492Z" level=info msg="StopPodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" returns successfully" Sep 5 23:53:35.654626 systemd[1]: run-netns-cni\x2d4253cf18\x2d12f1\x2d76b8\x2d3224\x2db686f1344719.mount: Deactivated successfully. Sep 5 23:53:35.656572 containerd[1735]: time="2025-09-05T23:53:35.656262371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmdjj,Uid:b655b97a-e9ef-4351-a639-e1502a0f30b8,Namespace:calico-system,Attempt:1,}" Sep 5 23:53:35.822343 systemd-networkd[1504]: cali5fe3e584cea: Link UP Sep 5 23:53:35.823899 systemd-networkd[1504]: cali5fe3e584cea: Gained carrier Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.737 [INFO][4742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0 csi-node-driver- calico-system b655b97a-e9ef-4351-a639-e1502a0f30b8 917 0 2025-09-05 23:53:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 csi-node-driver-gmdjj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5fe3e584cea [] [] }} ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.738 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.766 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" 
HandleID="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.766 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" HandleID="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-29d70f4830", "pod":"csi-node-driver-gmdjj", "timestamp":"2025-09-05 23:53:35.766271141 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.766 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.766 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.766 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.776 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.783 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.790 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.792 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.794 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.795 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.796 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40 Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.804 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.811 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.130/26] block=192.168.46.128/26 handle="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.811 [INFO][4753] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.130/26] handle="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.811 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:35.847907 containerd[1735]: 2025-09-05 23:53:35.811 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.130/26] IPv6=[] ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" HandleID="k8s-pod-network.62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.848551 containerd[1735]: 2025-09-05 23:53:35.814 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b655b97a-e9ef-4351-a639-e1502a0f30b8", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"csi-node-driver-gmdjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fe3e584cea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:35.848551 containerd[1735]: 2025-09-05 23:53:35.814 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.130/32] ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.848551 containerd[1735]: 2025-09-05 23:53:35.814 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fe3e584cea ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.848551 containerd[1735]: 2025-09-05 23:53:35.824 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" 
Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.848551 containerd[1735]: 2025-09-05 23:53:35.824 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b655b97a-e9ef-4351-a639-e1502a0f30b8", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40", Pod:"csi-node-driver-gmdjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fe3e584cea", MAC:"d6:06:04:f7:73:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:35.848551 containerd[1735]: 2025-09-05 23:53:35.843 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40" Namespace="calico-system" Pod="csi-node-driver-gmdjj" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:35.875939 containerd[1735]: time="2025-09-05T23:53:35.875687430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:35.875939 containerd[1735]: time="2025-09-05T23:53:35.875744310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:35.875939 containerd[1735]: time="2025-09-05T23:53:35.875760030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:35.876848 containerd[1735]: time="2025-09-05T23:53:35.876390190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:35.905027 systemd[1]: Started cri-containerd-62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40.scope - libcontainer container 62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40. 
Sep 5 23:53:35.933580 containerd[1735]: time="2025-09-05T23:53:35.933461134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmdjj,Uid:b655b97a-e9ef-4351-a639-e1502a0f30b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40\"" Sep 5 23:53:36.541948 containerd[1735]: time="2025-09-05T23:53:36.541460763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:36.544225 containerd[1735]: time="2025-09-05T23:53:36.544067002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:53:36.548715 containerd[1735]: time="2025-09-05T23:53:36.548660841Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:36.556990 containerd[1735]: time="2025-09-05T23:53:36.556936718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:36.557691 containerd[1735]: time="2025-09-05T23:53:36.557654158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.242749049s" Sep 5 23:53:36.557691 containerd[1735]: time="2025-09-05T23:53:36.557689758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:53:36.560086 containerd[1735]: time="2025-09-05T23:53:36.559894957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:53:36.565890 containerd[1735]: time="2025-09-05T23:53:36.565705196Z" level=info msg="CreateContainer within sandbox \"8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:53:36.603095 containerd[1735]: time="2025-09-05T23:53:36.602973105Z" level=info msg="CreateContainer within sandbox \"8f3190af66386fb9bd25362ff0429f1c6ed1ebc9cb8de1cc7fb0fa4403482e47\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0fa276118f9e467fb0cc6d7651702ebc5170a8f9aec30be6a860b9e6c923fa62\"" Sep 5 23:53:36.604086 containerd[1735]: time="2025-09-05T23:53:36.603674105Z" level=info msg="StartContainer for \"0fa276118f9e467fb0cc6d7651702ebc5170a8f9aec30be6a860b9e6c923fa62\"" Sep 5 23:53:36.631033 systemd[1]: Started cri-containerd-0fa276118f9e467fb0cc6d7651702ebc5170a8f9aec30be6a860b9e6c923fa62.scope - libcontainer container 0fa276118f9e467fb0cc6d7651702ebc5170a8f9aec30be6a860b9e6c923fa62. 
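Aside: a quick sanity check of the pull timing just logged -- 30,823,530 bytes (the reported repo-digest size) over ~2.243 s works out to roughly 13.7 MB/s:

    size_bytes = 30_823_530      # size reported for whisker-backend above
    seconds = 2.242749049        # logged pull duration
    print(f"{size_bytes / seconds / 1e6:.2f} MB/s")   # ~13.74 MB/s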
Sep 5 23:53:36.674423 containerd[1735]: time="2025-09-05T23:53:36.674366485Z" level=info msg="StartContainer for \"0fa276118f9e467fb0cc6d7651702ebc5170a8f9aec30be6a860b9e6c923fa62\" returns successfully" Sep 5 23:53:37.142975 systemd-networkd[1504]: cali5fe3e584cea: Gained IPv6LL Sep 5 23:53:37.555624 containerd[1735]: time="2025-09-05T23:53:37.555007438Z" level=info msg="StopPodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\"" Sep 5 23:53:37.607819 kubelet[3230]: I0905 23:53:37.607687 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6cd8fbdc7f-xn6jj" podStartSLOduration=2.732391631 podStartE2EDuration="6.607670503s" podCreationTimestamp="2025-09-05 23:53:31 +0000 UTC" firstStartedPulling="2025-09-05 23:53:32.683747286 +0000 UTC m=+41.230582522" lastFinishedPulling="2025-09-05 23:53:36.559026158 +0000 UTC m=+45.105861394" observedRunningTime="2025-09-05 23:53:36.802933489 +0000 UTC m=+45.349768725" watchObservedRunningTime="2025-09-05 23:53:37.607670503 +0000 UTC m=+46.154505739" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.608 [INFO][4865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.609 [INFO][4865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" iface="eth0" netns="/var/run/netns/cni-7fd30552-bbad-a7c4-ea3b-35bb8bdcd263" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.609 [INFO][4865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" iface="eth0" netns="/var/run/netns/cni-7fd30552-bbad-a7c4-ea3b-35bb8bdcd263" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.610 [INFO][4865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" iface="eth0" netns="/var/run/netns/cni-7fd30552-bbad-a7c4-ea3b-35bb8bdcd263" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.610 [INFO][4865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.610 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.629 [INFO][4872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.629 [INFO][4872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.629 [INFO][4872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.638 [WARNING][4872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.638 [INFO][4872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.639 [INFO][4872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:37.642418 containerd[1735]: 2025-09-05 23:53:37.640 [INFO][4865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:37.645409 containerd[1735]: time="2025-09-05T23:53:37.644959652Z" level=info msg="TearDown network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" successfully" Sep 5 23:53:37.645409 containerd[1735]: time="2025-09-05T23:53:37.645013572Z" level=info msg="StopPodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" returns successfully" Sep 5 23:53:37.647401 containerd[1735]: time="2025-09-05T23:53:37.646082772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-ms7sv,Uid:9b9118f6-416f-46e4-abe5-5d379ea246b1,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:53:37.646721 systemd[1]: run-netns-cni\x2d7fd30552\x2dbbad\x2da7c4\x2dea3b\x2d35bb8bdcd263.mount: Deactivated successfully. 
Sep 5 23:53:37.803335 systemd-networkd[1504]: cali4f3851ac33a: Link UP Sep 5 23:53:37.803802 systemd-networkd[1504]: cali4f3851ac33a: Gained carrier Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.740 [INFO][4879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0 calico-apiserver-7c5854478d- calico-apiserver 9b9118f6-416f-46e4-abe5-5d379ea246b1 934 0 2025-09-05 23:53:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c5854478d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 calico-apiserver-7c5854478d-ms7sv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f3851ac33a [] [] }} ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.740 [INFO][4879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.763 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" HandleID="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.763 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" HandleID="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-29d70f4830", "pod":"calico-apiserver-7c5854478d-ms7sv", "timestamp":"2025-09-05 23:53:37.763144739 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.763 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.763 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.763 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.772 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.776 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.779 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.781 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.783 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.783 [INFO][4890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.784 [INFO][4890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751 Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.788 [INFO][4890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.798 [INFO][4890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.131/26] block=192.168.46.128/26 handle="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.798 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.131/26] handle="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.798 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
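Aside: the "About to acquire" / "Acquired" / "Released" triplets above show every IPAM operation serialized behind one host-wide lock. A minimal sketch of that pattern (hypothetical names; not the ipam_plugin.go implementation):

    import threading

    host_ipam_lock = threading.Lock()          # the "host-wide IPAM lock"

    def auto_assign(block_hosts, allocated):
        with host_ipam_lock:                   # About to acquire ... Acquired
            for ip in block_hosts:
                if ip not in allocated:
                    allocated.add(ip)          # claim the IP while holding the lock
                    return ip
            return None                        # exiting the block: Released ...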
Sep 5 23:53:37.823937 containerd[1735]: 2025-09-05 23:53:37.798 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.131/26] IPv6=[] ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" HandleID="k8s-pod-network.dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.824793 containerd[1735]: 2025-09-05 23:53:37.800 [INFO][4879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b9118f6-416f-46e4-abe5-5d379ea246b1", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"calico-apiserver-7c5854478d-ms7sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3851ac33a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:37.824793 containerd[1735]: 2025-09-05 23:53:37.800 [INFO][4879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.131/32] ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.824793 containerd[1735]: 2025-09-05 23:53:37.800 [INFO][4879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f3851ac33a ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.824793 containerd[1735]: 2025-09-05 23:53:37.804 [INFO][4879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.824793 containerd[1735]: 2025-09-05 23:53:37.805 [INFO][4879] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b9118f6-416f-46e4-abe5-5d379ea246b1", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751", Pod:"calico-apiserver-7c5854478d-ms7sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3851ac33a", MAC:"8a:a4:c8:6d:6f:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:37.824793 containerd[1735]: 2025-09-05 23:53:37.819 [INFO][4879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-ms7sv" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:37.850884 containerd[1735]: time="2025-09-05T23:53:37.850644635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:37.850884 containerd[1735]: time="2025-09-05T23:53:37.850698995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:37.850884 containerd[1735]: time="2025-09-05T23:53:37.850709635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.850884 containerd[1735]: time="2025-09-05T23:53:37.850787835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:37.877038 systemd[1]: Started cri-containerd-dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751.scope - libcontainer container dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751. 
Sep 5 23:53:37.907134 containerd[1735]: time="2025-09-05T23:53:37.907090019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-ms7sv,Uid:9b9118f6-416f-46e4-abe5-5d379ea246b1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751\"" Sep 5 23:53:38.215985 containerd[1735]: time="2025-09-05T23:53:38.215373652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.218313 containerd[1735]: time="2025-09-05T23:53:38.218159651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:53:38.224120 containerd[1735]: time="2025-09-05T23:53:38.223785810Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.230595 containerd[1735]: time="2025-09-05T23:53:38.230565008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.231341 containerd[1735]: time="2025-09-05T23:53:38.231192608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.671260651s" Sep 5 23:53:38.231341 containerd[1735]: time="2025-09-05T23:53:38.231225008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:53:38.233704 containerd[1735]: time="2025-09-05T23:53:38.232620607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:53:38.239873 containerd[1735]: time="2025-09-05T23:53:38.239747685Z" level=info msg="CreateContainer within sandbox \"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:53:38.289821 containerd[1735]: time="2025-09-05T23:53:38.289774791Z" level=info msg="CreateContainer within sandbox \"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7f1746836fdc110c9a2594315457bdd51e66fe6fc65b51daec4b3c200576ab6c\"" Sep 5 23:53:38.291211 containerd[1735]: time="2025-09-05T23:53:38.291181551Z" level=info msg="StartContainer for \"7f1746836fdc110c9a2594315457bdd51e66fe6fc65b51daec4b3c200576ab6c\"" Sep 5 23:53:38.316022 systemd[1]: Started cri-containerd-7f1746836fdc110c9a2594315457bdd51e66fe6fc65b51daec4b3c200576ab6c.scope - libcontainer container 7f1746836fdc110c9a2594315457bdd51e66fe6fc65b51daec4b3c200576ab6c. 
Sep 5 23:53:38.348268 containerd[1735]: time="2025-09-05T23:53:38.348224495Z" level=info msg="StartContainer for \"7f1746836fdc110c9a2594315457bdd51e66fe6fc65b51daec4b3c200576ab6c\" returns successfully" Sep 5 23:53:38.554396 containerd[1735]: time="2025-09-05T23:53:38.554270717Z" level=info msg="StopPodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\"" Sep 5 23:53:38.555221 containerd[1735]: time="2025-09-05T23:53:38.554274597Z" level=info msg="StopPodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\"" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.613 [INFO][5007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.615 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" iface="eth0" netns="/var/run/netns/cni-a9434488-de47-d50b-682b-18c2bf10f511" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.616 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" iface="eth0" netns="/var/run/netns/cni-a9434488-de47-d50b-682b-18c2bf10f511" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.616 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" iface="eth0" netns="/var/run/netns/cni-a9434488-de47-d50b-682b-18c2bf10f511" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.616 [INFO][5007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.616 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.644 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.644 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.645 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.655 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.655 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.656 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:38.662947 containerd[1735]: 2025-09-05 23:53:38.658 [INFO][5007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:38.664168 containerd[1735]: time="2025-09-05T23:53:38.663632366Z" level=info msg="TearDown network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" successfully" Sep 5 23:53:38.664168 containerd[1735]: time="2025-09-05T23:53:38.663660966Z" level=info msg="StopPodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" returns successfully" Sep 5 23:53:38.665125 containerd[1735]: time="2025-09-05T23:53:38.665008886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8cqwz,Uid:e0f93c22-927f-47b6-8cee-8ea27a2ee078,Namespace:kube-system,Attempt:1,}" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.619 [INFO][5011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.619 [INFO][5011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" iface="eth0" netns="/var/run/netns/cni-fd10c819-a4da-1924-34a1-405c64d329eb" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.620 [INFO][5011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" iface="eth0" netns="/var/run/netns/cni-fd10c819-a4da-1924-34a1-405c64d329eb" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.620 [INFO][5011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" iface="eth0" netns="/var/run/netns/cni-fd10c819-a4da-1924-34a1-405c64d329eb" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.620 [INFO][5011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.620 [INFO][5011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.648 [INFO][5024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.648 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.656 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.668 [WARNING][5024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.668 [INFO][5024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.669 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:38.673020 containerd[1735]: 2025-09-05 23:53:38.671 [INFO][5011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:38.673559 containerd[1735]: time="2025-09-05T23:53:38.673459003Z" level=info msg="TearDown network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" successfully" Sep 5 23:53:38.673559 containerd[1735]: time="2025-09-05T23:53:38.673480763Z" level=info msg="StopPodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" returns successfully" Sep 5 23:53:38.674247 containerd[1735]: time="2025-09-05T23:53:38.674051883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jffjd,Uid:6c6dfb93-02c5-4946-b32d-4225aadf4328,Namespace:calico-system,Attempt:1,}" Sep 5 23:53:38.689037 systemd[1]: run-netns-cni\x2dfd10c819\x2da4da\x2d1924\x2d34a1\x2d405c64d329eb.mount: Deactivated successfully. Sep 5 23:53:38.689147 systemd[1]: run-netns-cni\x2da9434488\x2dde47\x2dd50b\x2d682b\x2d18c2bf10f511.mount: Deactivated successfully. 
Sep 5 23:53:38.866191 systemd-networkd[1504]: calic477d5bada8: Link UP Sep 5 23:53:38.869479 systemd-networkd[1504]: calic477d5bada8: Gained carrier Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.776 [INFO][5040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0 goldmane-54d579b49d- calico-system 6c6dfb93-02c5-4946-b32d-4225aadf4328 947 0 2025-09-05 23:53:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 goldmane-54d579b49d-jffjd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic477d5bada8 [] [] }} ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.776 [INFO][5040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.808 [INFO][5061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" HandleID="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.809 [INFO][5061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" HandleID="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-29d70f4830", "pod":"goldmane-54d579b49d-jffjd", "timestamp":"2025-09-05 23:53:38.808972005 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.809 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.809 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.809 [INFO][5061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.827 [INFO][5061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.832 [INFO][5061] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.837 [INFO][5061] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.839 [INFO][5061] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.841 [INFO][5061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.841 [INFO][5061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.842 [INFO][5061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.847 [INFO][5061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.857 [INFO][5061] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.132/26] block=192.168.46.128/26 handle="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.857 [INFO][5061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.132/26] handle="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.857 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:53:38.889116 containerd[1735]: 2025-09-05 23:53:38.857 [INFO][5061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.132/26] IPv6=[] ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" HandleID="k8s-pod-network.570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.889675 containerd[1735]: 2025-09-05 23:53:38.861 [INFO][5040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6c6dfb93-02c5-4946-b32d-4225aadf4328", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"goldmane-54d579b49d-jffjd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic477d5bada8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:38.889675 containerd[1735]: 2025-09-05 23:53:38.861 [INFO][5040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.132/32] ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.889675 containerd[1735]: 2025-09-05 23:53:38.861 [INFO][5040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic477d5bada8 ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.889675 containerd[1735]: 2025-09-05 23:53:38.870 [INFO][5040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.889675 containerd[1735]: 2025-09-05 23:53:38.871 [INFO][5040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" 
Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6c6dfb93-02c5-4946-b32d-4225aadf4328", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be", Pod:"goldmane-54d579b49d-jffjd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic477d5bada8", MAC:"ba:0f:56:f0:04:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:38.889675 containerd[1735]: 2025-09-05 23:53:38.886 [INFO][5040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be" Namespace="calico-system" Pod="goldmane-54d579b49d-jffjd" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:38.927114 containerd[1735]: time="2025-09-05T23:53:38.926769732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:38.927114 containerd[1735]: time="2025-09-05T23:53:38.926826892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:38.927114 containerd[1735]: time="2025-09-05T23:53:38.926842052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:38.927114 containerd[1735]: time="2025-09-05T23:53:38.927006172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:38.937176 systemd-networkd[1504]: cali4f3851ac33a: Gained IPv6LL Sep 5 23:53:38.950144 systemd[1]: Started cri-containerd-570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be.scope - libcontainer container 570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be. 
Sep 5 23:53:38.992965 systemd-networkd[1504]: cali5d6ee8b1694: Link UP Sep 5 23:53:38.999358 systemd-networkd[1504]: cali5d6ee8b1694: Gained carrier Sep 5 23:53:39.016722 containerd[1735]: time="2025-09-05T23:53:39.016674987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jffjd,Uid:6c6dfb93-02c5-4946-b32d-4225aadf4328,Namespace:calico-system,Attempt:1,} returns sandbox id \"570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be\"" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.774 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0 coredns-674b8bbfcf- kube-system e0f93c22-927f-47b6-8cee-8ea27a2ee078 946 0 2025-09-05 23:52:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 coredns-674b8bbfcf-8cqwz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d6ee8b1694 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.774 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.810 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" HandleID="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.810 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" HandleID="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-29d70f4830", "pod":"coredns-674b8bbfcf-8cqwz", "timestamp":"2025-09-05 23:53:38.810335805 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.810 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.857 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.857 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.938 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.945 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.952 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.955 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.958 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.959 [INFO][5063] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.961 [INFO][5063] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.972 [INFO][5063] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.980 [INFO][5063] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.133/26] block=192.168.46.128/26 handle="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.981 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.133/26] handle="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.981 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:53:39.026898 containerd[1735]: 2025-09-05 23:53:38.981 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.133/26] IPv6=[] ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" HandleID="k8s-pod-network.53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.027988 containerd[1735]: 2025-09-05 23:53:38.986 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0f93c22-927f-47b6-8cee-8ea27a2ee078", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"coredns-674b8bbfcf-8cqwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d6ee8b1694", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:39.027988 containerd[1735]: 2025-09-05 23:53:38.986 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.133/32] ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.027988 containerd[1735]: 2025-09-05 23:53:38.986 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d6ee8b1694 ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.027988 containerd[1735]: 2025-09-05 23:53:39.004 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.027988 containerd[1735]: 2025-09-05 23:53:39.005 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0f93c22-927f-47b6-8cee-8ea27a2ee078", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc", Pod:"coredns-674b8bbfcf-8cqwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d6ee8b1694", MAC:"92:9a:5f:7c:2a:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:39.027988 containerd[1735]: 2025-09-05 23:53:39.022 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc" Namespace="kube-system" Pod="coredns-674b8bbfcf-8cqwz" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:39.053697 containerd[1735]: time="2025-09-05T23:53:39.053588816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:39.054015 containerd[1735]: time="2025-09-05T23:53:39.053649576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:39.054015 containerd[1735]: time="2025-09-05T23:53:39.053659736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:39.054165 containerd[1735]: time="2025-09-05T23:53:39.053739496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:39.073058 systemd[1]: Started cri-containerd-53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc.scope - libcontainer container 53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc. Sep 5 23:53:39.109330 containerd[1735]: time="2025-09-05T23:53:39.109268321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8cqwz,Uid:e0f93c22-927f-47b6-8cee-8ea27a2ee078,Namespace:kube-system,Attempt:1,} returns sandbox id \"53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc\"" Sep 5 23:53:39.122398 containerd[1735]: time="2025-09-05T23:53:39.122182797Z" level=info msg="CreateContainer within sandbox \"53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:53:39.163568 containerd[1735]: time="2025-09-05T23:53:39.163412946Z" level=info msg="CreateContainer within sandbox \"53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dacb8bbee9160aff530b1a3aaf26bb2d79e348be258cb53676c349ddcf96bdbd\"" Sep 5 23:53:39.165161 containerd[1735]: time="2025-09-05T23:53:39.165132985Z" level=info msg="StartContainer for \"dacb8bbee9160aff530b1a3aaf26bb2d79e348be258cb53676c349ddcf96bdbd\"" Sep 5 23:53:39.190040 systemd[1]: Started cri-containerd-dacb8bbee9160aff530b1a3aaf26bb2d79e348be258cb53676c349ddcf96bdbd.scope - libcontainer container dacb8bbee9160aff530b1a3aaf26bb2d79e348be258cb53676c349ddcf96bdbd. Sep 5 23:53:39.217890 containerd[1735]: time="2025-09-05T23:53:39.217711850Z" level=info msg="StartContainer for \"dacb8bbee9160aff530b1a3aaf26bb2d79e348be258cb53676c349ddcf96bdbd\" returns successfully" Sep 5 23:53:39.555100 containerd[1735]: time="2025-09-05T23:53:39.554791075Z" level=info msg="StopPodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\"" Sep 5 23:53:39.558969 containerd[1735]: time="2025-09-05T23:53:39.558088955Z" level=info msg="StopPodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\"" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.629 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.630 [INFO][5228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" iface="eth0" netns="/var/run/netns/cni-257607a7-9f04-d69f-cadc-245a3c552ae5" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.631 [INFO][5228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" iface="eth0" netns="/var/run/netns/cni-257607a7-9f04-d69f-cadc-245a3c552ae5" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.631 [INFO][5228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" iface="eth0" netns="/var/run/netns/cni-257607a7-9f04-d69f-cadc-245a3c552ae5" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.632 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.632 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.655 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.655 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.655 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.664 [WARNING][5243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.664 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.665 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:39.668645 containerd[1735]: 2025-09-05 23:53:39.667 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:39.670818 containerd[1735]: time="2025-09-05T23:53:39.670557803Z" level=info msg="TearDown network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" successfully" Sep 5 23:53:39.670818 containerd[1735]: time="2025-09-05T23:53:39.670590163Z" level=info msg="StopPodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" returns successfully" Sep 5 23:53:39.672022 containerd[1735]: time="2025-09-05T23:53:39.671983083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdz5l,Uid:5666caf1-ebe2-4df7-a26f-9fc0c5f462aa,Namespace:kube-system,Attempt:1,}" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.624 [INFO][5225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.626 [INFO][5225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" iface="eth0" netns="/var/run/netns/cni-efa7b77e-1b5c-1205-d83a-bc8e223eafb4" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.626 [INFO][5225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" iface="eth0" netns="/var/run/netns/cni-efa7b77e-1b5c-1205-d83a-bc8e223eafb4" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.628 [INFO][5225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" iface="eth0" netns="/var/run/netns/cni-efa7b77e-1b5c-1205-d83a-bc8e223eafb4" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.628 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.629 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.655 [INFO][5241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.655 [INFO][5241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.665 [INFO][5241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.679 [WARNING][5241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.679 [INFO][5241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.680 [INFO][5241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:39.683661 containerd[1735]: 2025-09-05 23:53:39.682 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:39.684538 containerd[1735]: time="2025-09-05T23:53:39.684106239Z" level=info msg="TearDown network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" successfully" Sep 5 23:53:39.684538 containerd[1735]: time="2025-09-05T23:53:39.684134199Z" level=info msg="StopPodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" returns successfully" Sep 5 23:53:39.685118 containerd[1735]: time="2025-09-05T23:53:39.685085959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c94d557f-2p295,Uid:67194c50-357b-4abf-9127-333241d1e011,Namespace:calico-system,Attempt:1,}" Sep 5 23:53:39.690340 systemd[1]: run-netns-cni\x2defa7b77e\x2d1b5c\x2d1205\x2dd83a\x2dbc8e223eafb4.mount: Deactivated successfully. Sep 5 23:53:39.690692 systemd[1]: run-netns-cni\x2d257607a7\x2d9f04\x2dd69f\x2dcadc\x2d245a3c552ae5.mount: Deactivated successfully. Sep 5 23:53:39.857214 kubelet[3230]: I0905 23:53:39.854712 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8cqwz" podStartSLOduration=42.854695462 podStartE2EDuration="42.854695462s" podCreationTimestamp="2025-09-05 23:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:39.827739464 +0000 UTC m=+48.374574700" watchObservedRunningTime="2025-09-05 23:53:39.854695462 +0000 UTC m=+48.401530698" Sep 5 23:53:39.917259 systemd-networkd[1504]: cali6baaf5062a0: Link UP Sep 5 23:53:39.917457 systemd-networkd[1504]: cali6baaf5062a0: Gained carrier Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.793 [INFO][5255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0 coredns-674b8bbfcf- kube-system 5666caf1-ebe2-4df7-a26f-9fc0c5f462aa 966 0 2025-09-05 23:52:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 coredns-674b8bbfcf-hdz5l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6baaf5062a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.794 [INFO][5255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.848 [INFO][5281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" HandleID="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.848 [INFO][5281] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" HandleID="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-29d70f4830", "pod":"coredns-674b8bbfcf-hdz5l", "timestamp":"2025-09-05 23:53:39.848703303 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.849 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.849 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.849 [INFO][5281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.874 [INFO][5281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.884 [INFO][5281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.889 [INFO][5281] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.891 [INFO][5281] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.893 [INFO][5281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.893 [INFO][5281] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.895 [INFO][5281] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.903 [INFO][5281] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.911 [INFO][5281] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.134/26] block=192.168.46.128/26 handle="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.911 [INFO][5281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.134/26] handle="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.911 [INFO][5281] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Sep 5 23:53:39.939108 containerd[1735]: 2025-09-05 23:53:39.911 [INFO][5281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.134/26] IPv6=[] ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" HandleID="k8s-pod-network.f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.939676 containerd[1735]: 2025-09-05 23:53:39.914 [INFO][5255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"coredns-674b8bbfcf-hdz5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6baaf5062a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:39.939676 containerd[1735]: 2025-09-05 23:53:39.914 [INFO][5255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.134/32] ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.939676 containerd[1735]: 2025-09-05 23:53:39.914 [INFO][5255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6baaf5062a0 ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.939676 containerd[1735]: 2025-09-05 23:53:39.917 [INFO][5255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.939676 containerd[1735]: 2025-09-05 23:53:39.919 [INFO][5255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f", Pod:"coredns-674b8bbfcf-hdz5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6baaf5062a0", MAC:"52:f9:b2:06:c0:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:39.939676 containerd[1735]: 2025-09-05 23:53:39.936 [INFO][5255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f" Namespace="kube-system" Pod="coredns-674b8bbfcf-hdz5l" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:39.958842 containerd[1735]: time="2025-09-05T23:53:39.958624775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:39.959197 containerd[1735]: time="2025-09-05T23:53:39.958808135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:39.959197 containerd[1735]: time="2025-09-05T23:53:39.959030015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:39.959746 containerd[1735]: time="2025-09-05T23:53:39.959229935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:39.983352 systemd[1]: Started cri-containerd-f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f.scope - libcontainer container f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f. Sep 5 23:53:40.032116 systemd-networkd[1504]: calia9455db1718: Link UP Sep 5 23:53:40.033277 systemd-networkd[1504]: calia9455db1718: Gained carrier Sep 5 23:53:40.047047 containerd[1735]: time="2025-09-05T23:53:40.046997490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hdz5l,Uid:5666caf1-ebe2-4df7-a26f-9fc0c5f462aa,Namespace:kube-system,Attempt:1,} returns sandbox id \"f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f\"" Sep 5 23:53:40.057438 containerd[1735]: time="2025-09-05T23:53:40.057392129Z" level=info msg="CreateContainer within sandbox \"f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.803 [INFO][5260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0 calico-kube-controllers-78c94d557f- calico-system 67194c50-357b-4abf-9127-333241d1e011 965 0 2025-09-05 23:53:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78c94d557f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 calico-kube-controllers-78c94d557f-2p295 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9455db1718 [] [] }} ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.804 [INFO][5260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.873 [INFO][5286] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" HandleID="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.873 [INFO][5286] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" HandleID="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-29d70f4830", "pod":"calico-kube-controllers-78c94d557f-2p295", "timestamp":"2025-09-05 23:53:39.873454781 +0000 UTC"}, 
Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.874 [INFO][5286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.911 [INFO][5286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.911 [INFO][5286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.979 [INFO][5286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.989 [INFO][5286] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.994 [INFO][5286] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.996 [INFO][5286] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.999 [INFO][5286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:39.999 [INFO][5286] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:40.000 [INFO][5286] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38 Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:40.007 [INFO][5286] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:40.020 [INFO][5286] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.135/26] block=192.168.46.128/26 handle="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:40.022 [INFO][5286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.135/26] handle="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:40.022 [INFO][5286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:53:40.068953 containerd[1735]: 2025-09-05 23:53:40.022 [INFO][5286] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.135/26] IPv6=[] ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" HandleID="k8s-pod-network.e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.071216 containerd[1735]: 2025-09-05 23:53:40.026 [INFO][5260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0", GenerateName:"calico-kube-controllers-78c94d557f-", Namespace:"calico-system", SelfLink:"", UID:"67194c50-357b-4abf-9127-333241d1e011", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c94d557f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"calico-kube-controllers-78c94d557f-2p295", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9455db1718", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:40.071216 containerd[1735]: 2025-09-05 23:53:40.026 [INFO][5260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.135/32] ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.071216 containerd[1735]: 2025-09-05 23:53:40.026 [INFO][5260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9455db1718 ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.071216 containerd[1735]: 2025-09-05 23:53:40.035 [INFO][5260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" 
WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.071216 containerd[1735]: 2025-09-05 23:53:40.038 [INFO][5260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0", GenerateName:"calico-kube-controllers-78c94d557f-", Namespace:"calico-system", SelfLink:"", UID:"67194c50-357b-4abf-9127-333241d1e011", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c94d557f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38", Pod:"calico-kube-controllers-78c94d557f-2p295", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9455db1718", MAC:"42:0d:86:e4:e0:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:40.071216 containerd[1735]: 2025-09-05 23:53:40.061 [INFO][5260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38" Namespace="calico-system" Pod="calico-kube-controllers-78c94d557f-2p295" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:40.099987 containerd[1735]: time="2025-09-05T23:53:40.099766806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:40.099987 containerd[1735]: time="2025-09-05T23:53:40.099911966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:40.099987 containerd[1735]: time="2025-09-05T23:53:40.099936366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:40.100919 containerd[1735]: time="2025-09-05T23:53:40.100785566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:40.114936 containerd[1735]: time="2025-09-05T23:53:40.113160085Z" level=info msg="CreateContainer within sandbox \"f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ac54168fd1e1c4c69a289747f96556bf82dc33f21f3fe63a8a7dfbc779097d4\"" Sep 5 23:53:40.115118 containerd[1735]: time="2025-09-05T23:53:40.115082525Z" level=info msg="StartContainer for \"3ac54168fd1e1c4c69a289747f96556bf82dc33f21f3fe63a8a7dfbc779097d4\"" Sep 5 23:53:40.126092 systemd[1]: Started cri-containerd-e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38.scope - libcontainer container e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38. Sep 5 23:53:40.164153 systemd[1]: Started cri-containerd-3ac54168fd1e1c4c69a289747f96556bf82dc33f21f3fe63a8a7dfbc779097d4.scope - libcontainer container 3ac54168fd1e1c4c69a289747f96556bf82dc33f21f3fe63a8a7dfbc779097d4. Sep 5 23:53:40.208347 containerd[1735]: time="2025-09-05T23:53:40.208158119Z" level=info msg="StartContainer for \"3ac54168fd1e1c4c69a289747f96556bf82dc33f21f3fe63a8a7dfbc779097d4\" returns successfully" Sep 5 23:53:40.214070 containerd[1735]: time="2025-09-05T23:53:40.213789639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c94d557f-2p295,Uid:67194c50-357b-4abf-9127-333241d1e011,Namespace:calico-system,Attempt:1,} returns sandbox id \"e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38\"" Sep 5 23:53:40.471069 systemd-networkd[1504]: cali5d6ee8b1694: Gained IPv6LL Sep 5 23:53:40.559874 containerd[1735]: time="2025-09-05T23:53:40.559498536Z" level=info msg="StopPodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\"" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.630 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.630 [INFO][5446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" iface="eth0" netns="/var/run/netns/cni-e575d895-3c93-afb5-4105-a5199119114e" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.631 [INFO][5446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" iface="eth0" netns="/var/run/netns/cni-e575d895-3c93-afb5-4105-a5199119114e" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.632 [INFO][5446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" iface="eth0" netns="/var/run/netns/cni-e575d895-3c93-afb5-4105-a5199119114e" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.632 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.632 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.653 [INFO][5454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.653 [INFO][5454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.653 [INFO][5454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.662 [WARNING][5454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.662 [INFO][5454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.663 [INFO][5454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:40.666344 containerd[1735]: 2025-09-05 23:53:40.664 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:40.666761 containerd[1735]: time="2025-09-05T23:53:40.666555249Z" level=info msg="TearDown network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" successfully" Sep 5 23:53:40.666761 containerd[1735]: time="2025-09-05T23:53:40.666590689Z" level=info msg="StopPodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" returns successfully" Sep 5 23:53:40.667298 containerd[1735]: time="2025-09-05T23:53:40.667268209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-5w9t2,Uid:7b09cec7-cf29-4181-9ddb-9c4e5f51fab5,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:53:40.691052 systemd[1]: run-netns-cni\x2de575d895\x2d3c93\x2dafb5\x2d4105\x2da5199119114e.mount: Deactivated successfully. 
Sep 5 23:53:40.727072 systemd-networkd[1504]: calic477d5bada8: Gained IPv6LL Sep 5 23:53:40.842464 kubelet[3230]: I0905 23:53:40.842088 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hdz5l" podStartSLOduration=43.842071318 podStartE2EDuration="43.842071318s" podCreationTimestamp="2025-09-05 23:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:53:40.841447198 +0000 UTC m=+49.388282434" watchObservedRunningTime="2025-09-05 23:53:40.842071318 +0000 UTC m=+49.388906554" Sep 5 23:53:41.124433 systemd-networkd[1504]: cali1152305d96e: Link UP Sep 5 23:53:41.125187 systemd-networkd[1504]: cali1152305d96e: Gained carrier Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.022 [INFO][5467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0 calico-apiserver-7c5854478d- calico-apiserver 7b09cec7-cf29-4181-9ddb-9c4e5f51fab5 989 0 2025-09-05 23:53:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c5854478d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-29d70f4830 calico-apiserver-7c5854478d-5w9t2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1152305d96e [] [] }} ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.023 [INFO][5467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.065 [INFO][5478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" HandleID="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.066 [INFO][5478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" HandleID="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-29d70f4830", "pod":"calico-apiserver-7c5854478d-5w9t2", "timestamp":"2025-09-05 23:53:41.065877903 +0000 UTC"}, Hostname:"ci-4081.3.5-n-29d70f4830", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 
23:53:41.066 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.066 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.066 [INFO][5478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-29d70f4830' Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.081 [INFO][5478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.086 [INFO][5478] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.092 [INFO][5478] ipam/ipam.go 511: Trying affinity for 192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.095 [INFO][5478] ipam/ipam.go 158: Attempting to load block cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.098 [INFO][5478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.46.128/26 host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.098 [INFO][5478] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.46.128/26 handle="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.100 [INFO][5478] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1 Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.108 [INFO][5478] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.46.128/26 handle="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.118 [INFO][5478] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.46.136/26] block=192.168.46.128/26 handle="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.118 [INFO][5478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.46.136/26] handle="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" host="ci-4081.3.5-n-29d70f4830" Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.118 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:53:41.153208 containerd[1735]: 2025-09-05 23:53:41.118 [INFO][5478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.46.136/26] IPv6=[] ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" HandleID="k8s-pod-network.91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.155787 containerd[1735]: 2025-09-05 23:53:41.121 [INFO][5467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"", Pod:"calico-apiserver-7c5854478d-5w9t2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1152305d96e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:41.155787 containerd[1735]: 2025-09-05 23:53:41.121 [INFO][5467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.46.136/32] ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.155787 containerd[1735]: 2025-09-05 23:53:41.121 [INFO][5467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1152305d96e ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.155787 containerd[1735]: 2025-09-05 23:53:41.123 [INFO][5467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.155787 containerd[1735]: 2025-09-05 23:53:41.124 [INFO][5467] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1", Pod:"calico-apiserver-7c5854478d-5w9t2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1152305d96e", MAC:"f6:6a:49:c8:ea:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:41.155787 containerd[1735]: 2025-09-05 23:53:41.147 [INFO][5467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1" Namespace="calico-apiserver" Pod="calico-apiserver-7c5854478d-5w9t2" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:41.201615 containerd[1735]: time="2025-09-05T23:53:41.199662174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:41.201615 containerd[1735]: time="2025-09-05T23:53:41.199713054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:41.201615 containerd[1735]: time="2025-09-05T23:53:41.199728934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:41.201615 containerd[1735]: time="2025-09-05T23:53:41.199804174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:41.257089 systemd[1]: Started cri-containerd-91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1.scope - libcontainer container 91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1. 
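Each "Gained IPv6LL" line that follows a veth creation reports the kernel's autoconfigured fe80::/64 link-local address. Assuming the default EUI-64 address generation (interfaces can instead be configured for stable-privacy addresses, which this log doesn't reveal), the address derives from the interface MAC recorded in the endpoint above, e.g. f6:6a:49:c8:ea:38 for cali1152305d96e:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC derives the fe80::/64 address the kernel
// autoconfigures for an interface under EUI-64 generation: flip the
// universal/local bit of the first MAC octet and splice ff:fe into
// the middle (modified EUI-64, RFC 4291).
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // invert the u/l bit
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13] = 0xfe, mac[3]
	ip[14], ip[15] = mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("f6:6a:49:c8:ea:38") // cali1152305d96e's MAC from the log
	fmt.Println(linkLocalFromMAC(mac))          // fe80::f46a:49ff:fec8:ea38
}
```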
Sep 5 23:53:41.345782 containerd[1735]: time="2025-09-05T23:53:41.345743484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c5854478d-5w9t2,Uid:7b09cec7-cf29-4181-9ddb-9c4e5f51fab5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1\"" Sep 5 23:53:41.688038 systemd-networkd[1504]: cali6baaf5062a0: Gained IPv6LL Sep 5 23:53:41.692627 systemd[1]: run-containerd-runc-k8s.io-91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1-runc.CxIWzk.mount: Deactivated successfully. Sep 5 23:53:41.752558 systemd-networkd[1504]: calia9455db1718: Gained IPv6LL Sep 5 23:53:41.782147 containerd[1735]: time="2025-09-05T23:53:41.781375375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:41.783892 containerd[1735]: time="2025-09-05T23:53:41.783848575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:53:41.789132 containerd[1735]: time="2025-09-05T23:53:41.789106094Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:41.794747 containerd[1735]: time="2025-09-05T23:53:41.794707374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:41.795646 containerd[1735]: time="2025-09-05T23:53:41.795613094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.562956287s" Sep 5 23:53:41.795719 containerd[1735]: time="2025-09-05T23:53:41.795648174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:53:41.797074 containerd[1735]: time="2025-09-05T23:53:41.797045134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:53:41.804301 containerd[1735]: time="2025-09-05T23:53:41.804265493Z" level=info msg="CreateContainer within sandbox \"dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:53:41.867814 containerd[1735]: time="2025-09-05T23:53:41.867770969Z" level=info msg="CreateContainer within sandbox \"dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"497b93a87528cfef92fa0ec8e0c4b454fe835603c643da10b507b77b26b10589\"" Sep 5 23:53:41.868392 containerd[1735]: time="2025-09-05T23:53:41.868373889Z" level=info msg="StartContainer for \"497b93a87528cfef92fa0ec8e0c4b454fe835603c643da10b507b77b26b10589\"" Sep 5 23:53:41.918085 systemd[1]: Started cri-containerd-497b93a87528cfef92fa0ec8e0c4b454fe835603c643da10b507b77b26b10589.scope - libcontainer container 497b93a87528cfef92fa0ec8e0c4b454fe835603c643da10b507b77b26b10589. 
Sep 5 23:53:41.954285 containerd[1735]: time="2025-09-05T23:53:41.954052523Z" level=info msg="StartContainer for \"497b93a87528cfef92fa0ec8e0c4b454fe835603c643da10b507b77b26b10589\" returns successfully" Sep 5 23:53:43.095060 systemd-networkd[1504]: cali1152305d96e: Gained IPv6LL Sep 5 23:53:43.322770 kubelet[3230]: I0905 23:53:43.322235 3230 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:43.576172 kubelet[3230]: I0905 23:53:43.574759 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c5854478d-ms7sv" podStartSLOduration=31.686760259 podStartE2EDuration="35.574740575s" podCreationTimestamp="2025-09-05 23:53:08 +0000 UTC" firstStartedPulling="2025-09-05 23:53:37.908503098 +0000 UTC m=+46.455338334" lastFinishedPulling="2025-09-05 23:53:41.796483414 +0000 UTC m=+50.343318650" observedRunningTime="2025-09-05 23:53:42.866812223 +0000 UTC m=+51.413647459" watchObservedRunningTime="2025-09-05 23:53:43.574740575 +0000 UTC m=+52.121575811" Sep 5 23:53:43.693397 containerd[1735]: time="2025-09-05T23:53:43.693345327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:43.696724 containerd[1735]: time="2025-09-05T23:53:43.696686447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:53:43.701637 containerd[1735]: time="2025-09-05T23:53:43.701578967Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:43.709721 containerd[1735]: time="2025-09-05T23:53:43.709291366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:43.710848 containerd[1735]: time="2025-09-05T23:53:43.710589766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.913510112s" Sep 5 23:53:43.711943 containerd[1735]: time="2025-09-05T23:53:43.711909646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:53:43.715418 containerd[1735]: time="2025-09-05T23:53:43.714602206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:53:43.721556 containerd[1735]: time="2025-09-05T23:53:43.721529806Z" level=info msg="CreateContainer within sandbox \"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:53:43.768469 containerd[1735]: time="2025-09-05T23:53:43.768370442Z" level=info msg="CreateContainer within sandbox \"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"2b94908434b8fa1b3632b0467a8b9f76d17cec02f01b4f54ec10ee48c019fb36\"" Sep 5 23:53:43.770007 containerd[1735]: time="2025-09-05T23:53:43.769223522Z" level=info msg="StartContainer for \"2b94908434b8fa1b3632b0467a8b9f76d17cec02f01b4f54ec10ee48c019fb36\"" Sep 5 23:53:43.800069 systemd[1]: Started cri-containerd-2b94908434b8fa1b3632b0467a8b9f76d17cec02f01b4f54ec10ee48c019fb36.scope - libcontainer container 2b94908434b8fa1b3632b0467a8b9f76d17cec02f01b4f54ec10ee48c019fb36. Sep 5 23:53:43.830516 containerd[1735]: time="2025-09-05T23:53:43.830309598Z" level=info msg="StartContainer for \"2b94908434b8fa1b3632b0467a8b9f76d17cec02f01b4f54ec10ee48c019fb36\" returns successfully" Sep 5 23:53:43.864410 kubelet[3230]: I0905 23:53:43.864356 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gmdjj" podStartSLOduration=22.086576723 podStartE2EDuration="29.864307116s" podCreationTimestamp="2025-09-05 23:53:14 +0000 UTC" firstStartedPulling="2025-09-05 23:53:35.935259053 +0000 UTC m=+44.482094249" lastFinishedPulling="2025-09-05 23:53:43.712989406 +0000 UTC m=+52.259824642" observedRunningTime="2025-09-05 23:53:43.863394716 +0000 UTC m=+52.410229952" watchObservedRunningTime="2025-09-05 23:53:43.864307116 +0000 UTC m=+52.411142352" Sep 5 23:53:44.396273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465305688.mount: Deactivated successfully. Sep 5 23:53:44.676581 kubelet[3230]: I0905 23:53:44.675924 3230 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:53:44.681547 kubelet[3230]: I0905 23:53:44.681509 3230 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:53:46.750958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723189296.mount: Deactivated successfully. 
Sep 5 23:53:47.281632 containerd[1735]: time="2025-09-05T23:53:47.280814208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:47.287370 containerd[1735]: time="2025-09-05T23:53:47.287332128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:53:47.292825 containerd[1735]: time="2025-09-05T23:53:47.292769247Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:47.301880 containerd[1735]: time="2025-09-05T23:53:47.300425127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:47.303397 containerd[1735]: time="2025-09-05T23:53:47.302615807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.587979281s" Sep 5 23:53:47.303397 containerd[1735]: time="2025-09-05T23:53:47.302655247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:53:47.304155 containerd[1735]: time="2025-09-05T23:53:47.304133327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:53:47.312973 containerd[1735]: time="2025-09-05T23:53:47.312938566Z" level=info msg="CreateContainer within sandbox \"570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:53:47.361766 containerd[1735]: time="2025-09-05T23:53:47.361725363Z" level=info msg="CreateContainer within sandbox \"570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9\"" Sep 5 23:53:47.362614 containerd[1735]: time="2025-09-05T23:53:47.362570723Z" level=info msg="StartContainer for \"fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9\"" Sep 5 23:53:47.428041 systemd[1]: Started cri-containerd-fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9.scope - libcontainer container fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9. Sep 5 23:53:47.504955 containerd[1735]: time="2025-09-05T23:53:47.504898873Z" level=info msg="StartContainer for \"fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9\" returns successfully" Sep 5 23:53:47.886440 systemd[1]: run-containerd-runc-k8s.io-fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9-runc.5QqzYc.mount: Deactivated successfully. 
Sep 5 23:53:49.737907 containerd[1735]: time="2025-09-05T23:53:49.736259723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:49.740343 containerd[1735]: time="2025-09-05T23:53:49.740310562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:53:49.748745 containerd[1735]: time="2025-09-05T23:53:49.748711482Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:49.756426 containerd[1735]: time="2025-09-05T23:53:49.756387001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:49.757256 containerd[1735]: time="2025-09-05T23:53:49.757199921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.452388915s" Sep 5 23:53:49.757256 containerd[1735]: time="2025-09-05T23:53:49.757232241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:53:49.758921 containerd[1735]: time="2025-09-05T23:53:49.758887921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:53:49.787204 containerd[1735]: time="2025-09-05T23:53:49.787157079Z" level=info msg="CreateContainer within sandbox \"e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:53:49.862364 containerd[1735]: time="2025-09-05T23:53:49.862314394Z" level=info msg="CreateContainer within sandbox \"e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0dd656130cdfeb703e18e651ccae49c9102a3ca754f0d7e8131eb082f2de3257\"" Sep 5 23:53:49.865525 containerd[1735]: time="2025-09-05T23:53:49.863500754Z" level=info msg="StartContainer for \"0dd656130cdfeb703e18e651ccae49c9102a3ca754f0d7e8131eb082f2de3257\"" Sep 5 23:53:49.914036 systemd[1]: Started cri-containerd-0dd656130cdfeb703e18e651ccae49c9102a3ca754f0d7e8131eb082f2de3257.scope - libcontainer container 0dd656130cdfeb703e18e651ccae49c9102a3ca754f0d7e8131eb082f2de3257. 
Sep 5 23:53:50.303329 containerd[1735]: time="2025-09-05T23:53:50.303086644Z" level=info msg="StartContainer for \"0dd656130cdfeb703e18e651ccae49c9102a3ca754f0d7e8131eb082f2de3257\" returns successfully" Sep 5 23:53:50.349274 containerd[1735]: time="2025-09-05T23:53:50.349212281Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:50.352741 containerd[1735]: time="2025-09-05T23:53:50.352690480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:53:50.354804 containerd[1735]: time="2025-09-05T23:53:50.354753120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 595.830199ms" Sep 5 23:53:50.354804 containerd[1735]: time="2025-09-05T23:53:50.354801600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:53:50.371558 containerd[1735]: time="2025-09-05T23:53:50.371502039Z" level=info msg="CreateContainer within sandbox \"91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:53:50.434059 containerd[1735]: time="2025-09-05T23:53:50.433964435Z" level=info msg="CreateContainer within sandbox \"91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e21de8eff07a073690da67cdad35b3dd6cdde176e485df63f3c6d347718bc5ec\"" Sep 5 23:53:50.434782 containerd[1735]: time="2025-09-05T23:53:50.434748635Z" level=info msg="StartContainer for \"e21de8eff07a073690da67cdad35b3dd6cdde176e485df63f3c6d347718bc5ec\"" Sep 5 23:53:50.485078 systemd[1]: Started cri-containerd-e21de8eff07a073690da67cdad35b3dd6cdde176e485df63f3c6d347718bc5ec.scope - libcontainer container e21de8eff07a073690da67cdad35b3dd6cdde176e485df63f3c6d347718bc5ec. 
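The second apiserver pull above completes in 595.830199ms against 3.562956287s for the first, because the layers are already local (note the ImageUpdate rather than ImageCreate event, and "bytes read=77" versus 44530807). The "in ..." suffixes are ordinary Go time.Duration strings and parse back losslessly:

```go
package main

import (
	"fmt"
	"time"
)

// containerd's "Pulled image ... in <duration>" suffixes are Go
// time.Duration strings; comparing the two apiserver pulls shows the
// effect of the layers already being present locally.
func main() {
	fast, _ := time.ParseDuration("595.830199ms") // second pull: layers already local
	slow, _ := time.ParseDuration("3.562956287s") // first pull over the network
	fmt.Printf("second pull was %.1fx faster\n", slow.Seconds()/fast.Seconds())
}
```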
Sep 5 23:53:51.209558 containerd[1735]: time="2025-09-05T23:53:51.209464862Z" level=info msg="StartContainer for \"e21de8eff07a073690da67cdad35b3dd6cdde176e485df63f3c6d347718bc5ec\" returns successfully" Sep 5 23:53:51.521580 kubelet[3230]: I0905 23:53:51.521364 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-jffjd" podStartSLOduration=29.235946659 podStartE2EDuration="37.52134868s" podCreationTimestamp="2025-09-05 23:53:14 +0000 UTC" firstStartedPulling="2025-09-05 23:53:39.018579506 +0000 UTC m=+47.565414742" lastFinishedPulling="2025-09-05 23:53:47.303981527 +0000 UTC m=+55.850816763" observedRunningTime="2025-09-05 23:53:47.928988925 +0000 UTC m=+56.475824161" watchObservedRunningTime="2025-09-05 23:53:51.52134868 +0000 UTC m=+60.068183916" Sep 5 23:53:51.559785 kubelet[3230]: I0905 23:53:51.559695 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c5854478d-5w9t2" podStartSLOduration=34.553399482 podStartE2EDuration="43.559679318s" podCreationTimestamp="2025-09-05 23:53:08 +0000 UTC" firstStartedPulling="2025-09-05 23:53:41.349342084 +0000 UTC m=+49.896177280" lastFinishedPulling="2025-09-05 23:53:50.35562188 +0000 UTC m=+58.902457116" observedRunningTime="2025-09-05 23:53:51.52380232 +0000 UTC m=+60.070637556" watchObservedRunningTime="2025-09-05 23:53:51.559679318 +0000 UTC m=+60.106514554" Sep 5 23:53:51.565186 containerd[1735]: time="2025-09-05T23:53:51.564938237Z" level=info msg="StopPodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\"" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.631 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1", Pod:"calico-apiserver-7c5854478d-5w9t2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1152305d96e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.631 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.631 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" iface="eth0" netns="" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.632 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.632 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.675 [INFO][5901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.676 [INFO][5901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.677 [INFO][5901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.691 [WARNING][5901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.691 [INFO][5901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.694 [INFO][5901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:51.698822 containerd[1735]: 2025-09-05 23:53:51.695 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.699499 containerd[1735]: time="2025-09-05T23:53:51.698879628Z" level=info msg="TearDown network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" successfully" Sep 5 23:53:51.699499 containerd[1735]: time="2025-09-05T23:53:51.698902708Z" level=info msg="StopPodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" returns successfully" Sep 5 23:53:51.701097 containerd[1735]: time="2025-09-05T23:53:51.700972868Z" level=info msg="RemovePodSandbox for \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\"" Sep 5 23:53:51.703883 containerd[1735]: time="2025-09-05T23:53:51.703761428Z" level=info msg="Forcibly stopping sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\"" Sep 5 23:53:51.740558 kubelet[3230]: I0905 23:53:51.740490 3230 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78c94d557f-2p295" podStartSLOduration=28.200435743 podStartE2EDuration="37.740471505s" podCreationTimestamp="2025-09-05 23:53:14 +0000 UTC" firstStartedPulling="2025-09-05 23:53:40.218323359 +0000 UTC m=+48.765158595" lastFinishedPulling="2025-09-05 23:53:49.758359121 +0000 UTC m=+58.305194357" observedRunningTime="2025-09-05 23:53:51.565836117 +0000 UTC m=+60.112671353" watchObservedRunningTime="2025-09-05 23:53:51.740471505 +0000 UTC m=+60.287306741" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.789 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b09cec7-cf29-4181-9ddb-9c4e5f51fab5", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"91d9b4818de97322f10b82e53e545923bcfd58e93475b5090b0fb990963bc0d1", Pod:"calico-apiserver-7c5854478d-5w9t2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1152305d96e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.789 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.789 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" iface="eth0" netns="" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.789 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.789 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.840 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.840 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.840 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.852 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.852 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" HandleID="k8s-pod-network.5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--5w9t2-eth0" Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.853 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:51.857916 containerd[1735]: 2025-09-05 23:53:51.854 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff" Sep 5 23:53:51.857916 containerd[1735]: time="2025-09-05T23:53:51.856311977Z" level=info msg="TearDown network for sandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" successfully" Sep 5 23:53:51.882692 containerd[1735]: time="2025-09-05T23:53:51.882646696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:51.882952 containerd[1735]: time="2025-09-05T23:53:51.882931056Z" level=info msg="RemovePodSandbox \"5272b79538b15c5abf88fceda002737bb6535ec963af4c9da1d9217782733fff\" returns successfully" Sep 5 23:53:51.883583 containerd[1735]: time="2025-09-05T23:53:51.883550816Z" level=info msg="StopPodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\"" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.921 [WARNING][5942] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.921 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.921 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" iface="eth0" netns="" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.921 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.921 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.948 [INFO][5949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.949 [INFO][5949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.949 [INFO][5949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.964 [WARNING][5949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.964 [INFO][5949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.967 [INFO][5949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:51.971110 containerd[1735]: 2025-09-05 23:53:51.968 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:51.973111 containerd[1735]: time="2025-09-05T23:53:51.971152050Z" level=info msg="TearDown network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" successfully" Sep 5 23:53:51.973111 containerd[1735]: time="2025-09-05T23:53:51.971177690Z" level=info msg="StopPodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" returns successfully" Sep 5 23:53:51.973111 containerd[1735]: time="2025-09-05T23:53:51.971645050Z" level=info msg="RemovePodSandbox for \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\"" Sep 5 23:53:51.973111 containerd[1735]: time="2025-09-05T23:53:51.971675730Z" level=info msg="Forcibly stopping sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\"" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.019 [WARNING][5963] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" WorkloadEndpoint="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.019 [INFO][5963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.019 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" iface="eth0" netns="" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.019 [INFO][5963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.019 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.057 [INFO][5971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.060 [INFO][5971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.060 [INFO][5971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.069 [WARNING][5971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.069 [INFO][5971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" HandleID="k8s-pod-network.64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Workload="ci--4081.3.5--n--29d70f4830-k8s-whisker--6894bc6856--ngkgx-eth0" Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.072 [INFO][5971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.076891 containerd[1735]: 2025-09-05 23:53:52.073 [INFO][5963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697" Sep 5 23:53:52.076891 containerd[1735]: time="2025-09-05T23:53:52.076014522Z" level=info msg="TearDown network for sandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" successfully" Sep 5 23:53:52.089014 containerd[1735]: time="2025-09-05T23:53:52.088972002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:52.089215 containerd[1735]: time="2025-09-05T23:53:52.089196281Z" level=info msg="RemovePodSandbox \"64b9e0ce683e29eb1bee8b29a9db60317a6112ee63d6611e55e27eb18eb12697\" returns successfully" Sep 5 23:53:52.089702 containerd[1735]: time="2025-09-05T23:53:52.089680641Z" level=info msg="StopPodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\"" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.129 [WARNING][5985] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0f93c22-927f-47b6-8cee-8ea27a2ee078", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc", Pod:"coredns-674b8bbfcf-8cqwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d6ee8b1694", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.130 [INFO][5985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.130 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" iface="eth0" netns="" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.130 [INFO][5985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.130 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.178 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.178 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.178 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.190 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.192 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.195 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.199668 containerd[1735]: 2025-09-05 23:53:52.198 [INFO][5985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.199668 containerd[1735]: time="2025-09-05T23:53:52.199462274Z" level=info msg="TearDown network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" successfully" Sep 5 23:53:52.199668 containerd[1735]: time="2025-09-05T23:53:52.199486874Z" level=info msg="StopPodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" returns successfully" Sep 5 23:53:52.203163 containerd[1735]: time="2025-09-05T23:53:52.201408154Z" level=info msg="RemovePodSandbox for \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\"" Sep 5 23:53:52.203163 containerd[1735]: time="2025-09-05T23:53:52.201438754Z" level=info msg="Forcibly stopping sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\"" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.292 [WARNING][6007] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0f93c22-927f-47b6-8cee-8ea27a2ee078", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"53bc66123c7ff91b06de782c087fc7d23b7b12aa76b316d460f61089959ee0dc", Pod:"coredns-674b8bbfcf-8cqwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d6ee8b1694", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.292 [INFO][6007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.292 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" iface="eth0" netns="" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.292 [INFO][6007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.292 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.326 [INFO][6014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.326 [INFO][6014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.326 [INFO][6014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.347 [WARNING][6014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.347 [INFO][6014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" HandleID="k8s-pod-network.bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--8cqwz-eth0" Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.349 [INFO][6014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.353890 containerd[1735]: 2025-09-05 23:53:52.350 [INFO][6007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d" Sep 5 23:53:52.353890 containerd[1735]: time="2025-09-05T23:53:52.353137063Z" level=info msg="TearDown network for sandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" successfully" Sep 5 23:53:52.365734 containerd[1735]: time="2025-09-05T23:53:52.365505023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:52.365734 containerd[1735]: time="2025-09-05T23:53:52.365634143Z" level=info msg="RemovePodSandbox \"bf8b23eacd990dbb6b0a603447b89342fcd131df21129e3aad6eab12d8c2fa2d\" returns successfully" Sep 5 23:53:52.366113 containerd[1735]: time="2025-09-05T23:53:52.366089023Z" level=info msg="StopPodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\"" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.421 [WARNING][6028] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f", Pod:"coredns-674b8bbfcf-hdz5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6baaf5062a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.421 [INFO][6028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.421 [INFO][6028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" iface="eth0" netns="" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.421 [INFO][6028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.421 [INFO][6028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.460 [INFO][6035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.460 [INFO][6035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.460 [INFO][6035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.470 [WARNING][6035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.471 [INFO][6035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.472 [INFO][6035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.477241 containerd[1735]: 2025-09-05 23:53:52.475 [INFO][6028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.477241 containerd[1735]: time="2025-09-05T23:53:52.476431495Z" level=info msg="TearDown network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" successfully" Sep 5 23:53:52.477241 containerd[1735]: time="2025-09-05T23:53:52.476456735Z" level=info msg="StopPodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" returns successfully" Sep 5 23:53:52.479540 containerd[1735]: time="2025-09-05T23:53:52.479244495Z" level=info msg="RemovePodSandbox for \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\"" Sep 5 23:53:52.479540 containerd[1735]: time="2025-09-05T23:53:52.479280775Z" level=info msg="Forcibly stopping sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\"" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.524 [WARNING][6050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5666caf1-ebe2-4df7-a26f-9fc0c5f462aa", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"f96bdd6e65d7d3fe8048203c6e656d9f3d0c0fe89ebcab5b087e4f02cb4fd14f", Pod:"coredns-674b8bbfcf-hdz5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.46.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6baaf5062a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.525 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.525 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" iface="eth0" netns="" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.525 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.525 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.552 [INFO][6057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.552 [INFO][6057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.552 [INFO][6057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.563 [WARNING][6057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.563 [INFO][6057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" HandleID="k8s-pod-network.8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Workload="ci--4081.3.5--n--29d70f4830-k8s-coredns--674b8bbfcf--hdz5l-eth0" Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.564 [INFO][6057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.569812 containerd[1735]: 2025-09-05 23:53:52.566 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594" Sep 5 23:53:52.570838 containerd[1735]: time="2025-09-05T23:53:52.569944049Z" level=info msg="TearDown network for sandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" successfully" Sep 5 23:53:52.602723 containerd[1735]: time="2025-09-05T23:53:52.602679206Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:52.602937 containerd[1735]: time="2025-09-05T23:53:52.602918846Z" level=info msg="RemovePodSandbox \"8e3fecac38ca4b77a616185ab1dd2f0e3a08db198620b5c7145e713778495594\" returns successfully" Sep 5 23:53:52.603698 containerd[1735]: time="2025-09-05T23:53:52.603432886Z" level=info msg="StopPodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\"" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.667 [WARNING][6071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6c6dfb93-02c5-4946-b32d-4225aadf4328", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be", Pod:"goldmane-54d579b49d-jffjd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic477d5bada8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.670 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.671 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" iface="eth0" netns="" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.671 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.671 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.703 [INFO][6079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.704 [INFO][6079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.704 [INFO][6079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.715 [WARNING][6079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.715 [INFO][6079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.717 [INFO][6079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.721726 containerd[1735]: 2025-09-05 23:53:52.719 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.721726 containerd[1735]: time="2025-09-05T23:53:52.721602118Z" level=info msg="TearDown network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" successfully" Sep 5 23:53:52.721726 containerd[1735]: time="2025-09-05T23:53:52.721629478Z" level=info msg="StopPodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" returns successfully" Sep 5 23:53:52.722802 containerd[1735]: time="2025-09-05T23:53:52.722382518Z" level=info msg="RemovePodSandbox for \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\"" Sep 5 23:53:52.722802 containerd[1735]: time="2025-09-05T23:53:52.722413198Z" level=info msg="Forcibly stopping sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\"" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.773 [WARNING][6093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6c6dfb93-02c5-4946-b32d-4225aadf4328", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"570c136361e22f06ff07ecdb7b8e750d60bf26b40f79ac2c258bbfceee4f40be", Pod:"goldmane-54d579b49d-jffjd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.46.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic477d5bada8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.773 [INFO][6093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.773 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" iface="eth0" netns="" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.773 [INFO][6093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.774 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.803 [INFO][6100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.804 [INFO][6100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.805 [INFO][6100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.818 [WARNING][6100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.818 [INFO][6100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" HandleID="k8s-pod-network.16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Workload="ci--4081.3.5--n--29d70f4830-k8s-goldmane--54d579b49d--jffjd-eth0" Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.820 [INFO][6100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.825768 containerd[1735]: 2025-09-05 23:53:52.823 [INFO][6093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a" Sep 5 23:53:52.827318 containerd[1735]: time="2025-09-05T23:53:52.825964311Z" level=info msg="TearDown network for sandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" successfully" Sep 5 23:53:52.858324 containerd[1735]: time="2025-09-05T23:53:52.858263469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:52.858674 containerd[1735]: time="2025-09-05T23:53:52.858547109Z" level=info msg="RemovePodSandbox \"16c3dd604c08d1ce53f3ad852d74106383d2c4eba19bee952e76f3b6eeede73a\" returns successfully" Sep 5 23:53:52.859206 containerd[1735]: time="2025-09-05T23:53:52.859178909Z" level=info msg="StopPodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\"" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.913 [WARNING][6114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b655b97a-e9ef-4351-a639-e1502a0f30b8", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40", Pod:"csi-node-driver-gmdjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fe3e584cea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.914 [INFO][6114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.914 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" iface="eth0" netns="" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.914 [INFO][6114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.914 [INFO][6114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.941 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.941 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.941 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.951 [WARNING][6122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.951 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.952 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:52.955695 containerd[1735]: 2025-09-05 23:53:52.954 [INFO][6114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:52.956253 containerd[1735]: time="2025-09-05T23:53:52.955751622Z" level=info msg="TearDown network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" successfully" Sep 5 23:53:52.956253 containerd[1735]: time="2025-09-05T23:53:52.955785182Z" level=info msg="StopPodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" returns successfully" Sep 5 23:53:52.957158 containerd[1735]: time="2025-09-05T23:53:52.956564182Z" level=info msg="RemovePodSandbox for \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\"" Sep 5 23:53:52.957158 containerd[1735]: time="2025-09-05T23:53:52.956599022Z" level=info msg="Forcibly stopping sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\"" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:52.993 [WARNING][6137] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b655b97a-e9ef-4351-a639-e1502a0f30b8", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"62b91608e3f88a9448309621957b6c486cdbc5fb716f66e7fb583cac6ead1a40", Pod:"csi-node-driver-gmdjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.46.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fe3e584cea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:52.994 [INFO][6137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:52.994 [INFO][6137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" iface="eth0" netns="" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:52.994 [INFO][6137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:52.994 [INFO][6137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.029 [INFO][6145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.031 [INFO][6145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.031 [INFO][6145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.043 [WARNING][6145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.043 [INFO][6145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" HandleID="k8s-pod-network.b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Workload="ci--4081.3.5--n--29d70f4830-k8s-csi--node--driver--gmdjj-eth0" Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.044 [INFO][6145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:53.050217 containerd[1735]: 2025-09-05 23:53:53.047 [INFO][6137] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456" Sep 5 23:53:53.052876 containerd[1735]: time="2025-09-05T23:53:53.050186416Z" level=info msg="TearDown network for sandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" successfully" Sep 5 23:53:53.063047 containerd[1735]: time="2025-09-05T23:53:53.063002855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:53.063320 containerd[1735]: time="2025-09-05T23:53:53.063263735Z" level=info msg="RemovePodSandbox \"b3cd6f584e55ce36170e10d0798d20591934c81d5f4aacb8fc3baa2863de3456\" returns successfully" Sep 5 23:53:53.063813 containerd[1735]: time="2025-09-05T23:53:53.063785935Z" level=info msg="StopPodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\"" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.105 [WARNING][6159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0", GenerateName:"calico-kube-controllers-78c94d557f-", Namespace:"calico-system", SelfLink:"", UID:"67194c50-357b-4abf-9127-333241d1e011", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c94d557f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38", Pod:"calico-kube-controllers-78c94d557f-2p295", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9455db1718", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.105 [INFO][6159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.105 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" iface="eth0" netns="" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.105 [INFO][6159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.105 [INFO][6159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.134 [INFO][6166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.134 [INFO][6166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.134 [INFO][6166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.143 [WARNING][6166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.143 [INFO][6166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.144 [INFO][6166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:53.150056 containerd[1735]: 2025-09-05 23:53:53.146 [INFO][6159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.150056 containerd[1735]: time="2025-09-05T23:53:53.149986369Z" level=info msg="TearDown network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" successfully" Sep 5 23:53:53.150056 containerd[1735]: time="2025-09-05T23:53:53.150011489Z" level=info msg="StopPodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" returns successfully" Sep 5 23:53:53.152607 containerd[1735]: time="2025-09-05T23:53:53.152311169Z" level=info msg="RemovePodSandbox for \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\"" Sep 5 23:53:53.152607 containerd[1735]: time="2025-09-05T23:53:53.152343049Z" level=info msg="Forcibly stopping sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\"" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.211 [WARNING][6180] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0", GenerateName:"calico-kube-controllers-78c94d557f-", Namespace:"calico-system", SelfLink:"", UID:"67194c50-357b-4abf-9127-333241d1e011", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c94d557f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"e3539032944b3146d3eba48dbef1aaf2f1ae5a36374de2f856bf741404e43e38", Pod:"calico-kube-controllers-78c94d557f-2p295", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.46.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9455db1718", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.211 [INFO][6180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.211 [INFO][6180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" iface="eth0" netns="" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.212 [INFO][6180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.212 [INFO][6180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.241 [INFO][6187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.241 [INFO][6187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.241 [INFO][6187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.257 [WARNING][6187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.257 [INFO][6187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" HandleID="k8s-pod-network.27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--kube--controllers--78c94d557f--2p295-eth0" Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.259 [INFO][6187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:53.263738 containerd[1735]: 2025-09-05 23:53:53.261 [INFO][6180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed" Sep 5 23:53:53.265275 containerd[1735]: time="2025-09-05T23:53:53.264671081Z" level=info msg="TearDown network for sandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" successfully" Sep 5 23:53:53.279146 containerd[1735]: time="2025-09-05T23:53:53.278920880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:53.279146 containerd[1735]: time="2025-09-05T23:53:53.279040200Z" level=info msg="RemovePodSandbox \"27584292085c173c241f84e61f3acf8a287a2685cd5d0813913b6f98b41571ed\" returns successfully" Sep 5 23:53:53.280113 containerd[1735]: time="2025-09-05T23:53:53.279824160Z" level=info msg="StopPodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\"" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.343 [WARNING][6201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b9118f6-416f-46e4-abe5-5d379ea246b1", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751", Pod:"calico-apiserver-7c5854478d-ms7sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3851ac33a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.343 [INFO][6201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.343 [INFO][6201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" iface="eth0" netns="" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.344 [INFO][6201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.344 [INFO][6201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.381 [INFO][6208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.381 [INFO][6208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.381 [INFO][6208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.399 [WARNING][6208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.399 [INFO][6208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.401 [INFO][6208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:53.407251 containerd[1735]: 2025-09-05 23:53:53.403 [INFO][6201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.407251 containerd[1735]: time="2025-09-05T23:53:53.407107351Z" level=info msg="TearDown network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" successfully" Sep 5 23:53:53.407251 containerd[1735]: time="2025-09-05T23:53:53.407134151Z" level=info msg="StopPodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" returns successfully" Sep 5 23:53:53.410584 containerd[1735]: time="2025-09-05T23:53:53.410165391Z" level=info msg="RemovePodSandbox for \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\"" Sep 5 23:53:53.410584 containerd[1735]: time="2025-09-05T23:53:53.410204951Z" level=info msg="Forcibly stopping sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\"" Sep 5 23:53:53.477516 kubelet[3230]: I0905 23:53:53.477355 3230 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.478 [WARNING][6223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0", GenerateName:"calico-apiserver-7c5854478d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b9118f6-416f-46e4-abe5-5d379ea246b1", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c5854478d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-29d70f4830", ContainerID:"dbc42c944eff26f43f9bb1d3341381d6c44421622cf0c771b007b19dd914c751", Pod:"calico-apiserver-7c5854478d-ms7sv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.46.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3851ac33a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.481 [INFO][6223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.481 [INFO][6223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" iface="eth0" netns="" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.481 [INFO][6223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.481 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.513 [INFO][6230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.513 [INFO][6230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.513 [INFO][6230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.523 [WARNING][6230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.523 [INFO][6230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" HandleID="k8s-pod-network.4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Workload="ci--4081.3.5--n--29d70f4830-k8s-calico--apiserver--7c5854478d--ms7sv-eth0" Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.525 [INFO][6230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:53:53.530041 containerd[1735]: 2025-09-05 23:53:53.527 [INFO][6223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526" Sep 5 23:53:53.530041 containerd[1735]: time="2025-09-05T23:53:53.529424503Z" level=info msg="TearDown network for sandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" successfully" Sep 5 23:53:53.562480 containerd[1735]: time="2025-09-05T23:53:53.562310941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:53:53.562480 containerd[1735]: time="2025-09-05T23:53:53.562396661Z" level=info msg="RemovePodSandbox \"4fe2a1da3b38958daa098e612225121e3486fd40a013fda36b2e393c92575526\" returns successfully" Sep 5 23:54:15.954401 update_engine[1705]: I20250905 23:54:15.953941 1705 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 5 23:54:15.954401 update_engine[1705]: I20250905 23:54:15.953989 1705 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 5 23:54:15.954401 update_engine[1705]: I20250905 23:54:15.954185 1705 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 5 23:54:15.955413 update_engine[1705]: I20250905 23:54:15.955389 1705 omaha_request_params.cc:62] Current group set to lts Sep 5 23:54:15.955796 update_engine[1705]: I20250905 23:54:15.955566 1705 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 5 23:54:15.955796 update_engine[1705]: I20250905 23:54:15.955581 1705 update_attempter.cc:643] Scheduling an action processor start. 
Sep 5 23:54:15.955796 update_engine[1705]: I20250905 23:54:15.955596 1705 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:54:15.956878 update_engine[1705]: I20250905 23:54:15.956750 1705 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 5 23:54:15.956878 update_engine[1705]: I20250905 23:54:15.956827 1705 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:54:15.956878 update_engine[1705]: I20250905 23:54:15.956836 1705 omaha_request_action.cc:272] Request: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: Sep 5 23:54:15.956878 update_engine[1705]: I20250905 23:54:15.956842 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:54:15.964296 update_engine[1705]: I20250905 23:54:15.962337 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:54:15.964296 update_engine[1705]: I20250905 23:54:15.963158 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:54:15.964397 locksmithd[1753]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 5 23:54:16.039265 update_engine[1705]: E20250905 23:54:16.039198 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:54:16.039423 update_engine[1705]: I20250905 23:54:16.039301 1705 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 5 23:54:21.475358 systemd[1]: run-containerd-runc-k8s.io-0dd656130cdfeb703e18e651ccae49c9102a3ca754f0d7e8131eb082f2de3257-runc.JJCkE6.mount: Deactivated successfully. Sep 5 23:54:25.936639 update_engine[1705]: I20250905 23:54:25.936508 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:54:25.936989 update_engine[1705]: I20250905 23:54:25.936764 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:54:25.937025 update_engine[1705]: I20250905 23:54:25.936998 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:54:26.061102 update_engine[1705]: E20250905 23:54:26.061041 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:54:26.061264 update_engine[1705]: I20250905 23:54:26.061136 1705 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 5 23:54:35.936585 update_engine[1705]: I20250905 23:54:35.936488 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:54:35.937014 update_engine[1705]: I20250905 23:54:35.936772 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:54:35.937118 update_engine[1705]: I20250905 23:54:35.937083 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 5 23:54:35.953938 update_engine[1705]: E20250905 23:54:35.953876 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:54:35.954090 update_engine[1705]: I20250905 23:54:35.953975 1705 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 5 23:54:45.938413 update_engine[1705]: I20250905 23:54:45.937901 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:54:45.938413 update_engine[1705]: I20250905 23:54:45.938117 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:54:45.938413 update_engine[1705]: I20250905 23:54:45.938344 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 5 23:54:46.044539 update_engine[1705]: E20250905 23:54:46.044476 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044559 1705 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044569 1705 omaha_request_action.cc:617] Omaha request response: Sep 5 23:54:46.044780 update_engine[1705]: E20250905 23:54:46.044656 1705 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044674 1705 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044680 1705 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044684 1705 update_attempter.cc:306] Processing Done. Sep 5 23:54:46.044780 update_engine[1705]: E20250905 23:54:46.044698 1705 update_attempter.cc:619] Update failed. Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044703 1705 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044709 1705 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 5 23:54:46.044780 update_engine[1705]: I20250905 23:54:46.044714 1705 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 5 23:54:46.045036 update_engine[1705]: I20250905 23:54:46.044929 1705 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 5 23:54:46.045036 update_engine[1705]: I20250905 23:54:46.044958 1705 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 5 23:54:46.045036 update_engine[1705]: I20250905 23:54:46.044965 1705 omaha_request_action.cc:272] Request: Sep 5 23:54:46.045036 update_engine[1705]: Sep 5 23:54:46.045036 update_engine[1705]: Sep 5 23:54:46.045036 update_engine[1705]: Sep 5 23:54:46.045036 update_engine[1705]: Sep 5 23:54:46.045036 update_engine[1705]: Sep 5 23:54:46.045036 update_engine[1705]: Sep 5 23:54:46.045036 update_engine[1705]: I20250905 23:54:46.044970 1705 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 5 23:54:46.045230 update_engine[1705]: I20250905 23:54:46.045121 1705 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 5 23:54:46.045452 update_engine[1705]: I20250905 23:54:46.045346 1705 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 5 23:54:46.045500 locksmithd[1753]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 5 23:54:46.117211 update_engine[1705]: E20250905 23:54:46.117152 1705 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117237 1705 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117247 1705 omaha_request_action.cc:617] Omaha request response: Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117254 1705 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117259 1705 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117264 1705 update_attempter.cc:306] Processing Done. Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117271 1705 update_attempter.cc:310] Error event sent. Sep 5 23:54:46.117346 update_engine[1705]: I20250905 23:54:46.117282 1705 update_check_scheduler.cc:74] Next update check in 40m49s Sep 5 23:54:46.117662 locksmithd[1753]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 5 23:55:18.880410 systemd[1]: run-containerd-runc-k8s.io-fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9-runc.4RHo7T.mount: Deactivated successfully. Sep 5 23:56:13.618513 systemd[1]: run-containerd-runc-k8s.io-91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90-runc.Q5ggKJ.mount: Deactivated successfully. Sep 5 23:56:18.880178 systemd[1]: run-containerd-runc-k8s.io-fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9-runc.Ks2Tzu.mount: Deactivated successfully. Sep 5 23:56:39.717215 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:49648.service - OpenSSH per-connection server daemon (10.200.16.10:49648). Sep 5 23:56:40.141729 sshd[6733]: Accepted publickey for core from 10.200.16.10 port 49648 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:56:40.143727 sshd[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:40.148887 systemd-logind[1701]: New session 10 of user core. Sep 5 23:56:40.152067 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 23:56:40.523549 sshd[6733]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:40.526674 systemd-logind[1701]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:56:40.527805 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:49648.service: Deactivated successfully. Sep 5 23:56:40.530141 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:56:40.533572 systemd-logind[1701]: Removed session 10. Sep 5 23:56:45.603425 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:56324.service - OpenSSH per-connection server daemon (10.200.16.10:56324). Sep 5 23:56:46.038313 sshd[6774]: Accepted publickey for core from 10.200.16.10 port 56324 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:56:46.040229 sshd[6774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:46.046136 systemd-logind[1701]: New session 11 of user core. 
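The update_engine sequence that ends above shows a fixed roughly ten-second retry cadence against the placeholder Omaha endpoint "disabled" (a hostname that can never resolve, hence "Could not resolve host: disabled"): three retries, a failed transfer converted to error code 2000 (kActionCodeOmahaErrorInHTTPResponse), one error-report request that fails the same way, and the next check deferred 40m49s. A minimal Go sketch of that bounded give-up-and-reschedule loop, assuming hypothetical function names and taking the counts and delays from the log:

    // Illustrative sketch only: retry an HTTP fetch on a short timer and
    // abandon the attempt once the retry budget is spent, as update_engine
    // does above. fetch stands in for libcurl_http_fetcher.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    const maxRetries = 3

    func fetch(url string) error {
        // The host "disabled" never resolves, matching the log.
        return errors.New("could not resolve host: disabled")
    }

    func checkForUpdate(url string, retryDelay time.Duration) error {
        for attempt := 0; ; attempt++ {
            if err := fetch(url); err == nil {
                return nil
            }
            if attempt >= maxRetries {
                // "Transfer resulted in an error (0), 0 bytes downloaded"
                return errors.New("omaha request network transfer failed")
            }
            fmt.Printf("No HTTP response, retry %d\n", attempt+1)
            time.Sleep(retryDelay) // ~10s between attempts in the log
        }
    }

    func main() {
        if err := checkForUpdate("http://disabled", 10*time.Second); err != nil {
            fmt.Println("update failed:", err)
            fmt.Println("next update check in", 40*time.Minute+49*time.Second)
        }
    }

Bounding the retries and deferring to the next scheduled check keeps a machine with updates disabled from hammering the unreachable server; the steady state that results is what the locksmithd UPDATE_STATUS_IDLE record above captures.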
Sep 5 23:56:46.050268 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:56:46.421954 sshd[6774]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:46.426318 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:56324.service: Deactivated successfully. Sep 5 23:56:46.429414 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:56:46.431437 systemd-logind[1701]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:56:46.432529 systemd-logind[1701]: Removed session 11. Sep 5 23:56:48.883827 systemd[1]: run-containerd-runc-k8s.io-fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9-runc.isuqxO.mount: Deactivated successfully. Sep 5 23:56:51.506278 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:48698.service - OpenSSH per-connection server daemon (10.200.16.10:48698). Sep 5 23:56:51.927246 sshd[6841]: Accepted publickey for core from 10.200.16.10 port 48698 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:56:51.928590 sshd[6841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:51.932576 systemd-logind[1701]: New session 12 of user core. Sep 5 23:56:51.936018 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:56:52.308112 sshd[6841]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:52.310992 systemd-logind[1701]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:56:52.312188 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:48698.service: Deactivated successfully. Sep 5 23:56:52.315299 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:56:52.319267 systemd-logind[1701]: Removed session 12. Sep 5 23:56:57.400936 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:48708.service - OpenSSH per-connection server daemon (10.200.16.10:48708). Sep 5 23:56:57.892851 sshd[6877]: Accepted publickey for core from 10.200.16.10 port 48708 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:56:57.894315 sshd[6877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:57.897976 systemd-logind[1701]: New session 13 of user core. Sep 5 23:56:57.903101 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:56:58.316187 sshd[6877]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:58.319675 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:48708.service: Deactivated successfully. Sep 5 23:56:58.322300 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:56:58.323223 systemd-logind[1701]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:56:58.324382 systemd-logind[1701]: Removed session 13. Sep 5 23:56:58.403087 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:48716.service - OpenSSH per-connection server daemon (10.200.16.10:48716). Sep 5 23:56:58.852434 sshd[6893]: Accepted publickey for core from 10.200.16.10 port 48716 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:56:58.853806 sshd[6893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:58.858702 systemd-logind[1701]: New session 14 of user core. Sep 5 23:56:58.867046 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:56:59.282452 sshd[6893]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:59.286273 systemd-logind[1701]: Session 14 logged out. Waiting for processes to exit. 
Sep 5 23:56:59.286798 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:48716.service: Deactivated successfully. Sep 5 23:56:59.288748 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:56:59.290583 systemd-logind[1701]: Removed session 14. Sep 5 23:56:59.361548 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:48730.service - OpenSSH per-connection server daemon (10.200.16.10:48730). Sep 5 23:56:59.786874 sshd[6904]: Accepted publickey for core from 10.200.16.10 port 48730 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:56:59.788115 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:59.792439 systemd-logind[1701]: New session 15 of user core. Sep 5 23:56:59.798039 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:57:00.172172 sshd[6904]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:00.175586 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:48730.service: Deactivated successfully. Sep 5 23:57:00.177287 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:57:00.179386 systemd-logind[1701]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:57:00.180925 systemd-logind[1701]: Removed session 15. Sep 5 23:57:05.250939 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:34138.service - OpenSSH per-connection server daemon (10.200.16.10:34138). Sep 5 23:57:05.675738 sshd[6921]: Accepted publickey for core from 10.200.16.10 port 34138 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:05.677110 sshd[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:05.680938 systemd-logind[1701]: New session 16 of user core. Sep 5 23:57:05.687039 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 23:57:06.046073 sshd[6921]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:06.049683 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:34138.service: Deactivated successfully. Sep 5 23:57:06.052085 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:57:06.053247 systemd-logind[1701]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:57:06.054637 systemd-logind[1701]: Removed session 16. Sep 5 23:57:06.141400 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:34146.service - OpenSSH per-connection server daemon (10.200.16.10:34146). Sep 5 23:57:06.587426 sshd[6934]: Accepted publickey for core from 10.200.16.10 port 34146 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:06.588791 sshd[6934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:06.593059 systemd-logind[1701]: New session 17 of user core. Sep 5 23:57:06.601002 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:57:07.115315 sshd[6934]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:07.118540 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:34146.service: Deactivated successfully. Sep 5 23:57:07.120176 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 23:57:07.120831 systemd-logind[1701]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:57:07.122134 systemd-logind[1701]: Removed session 17. Sep 5 23:57:07.187764 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:34148.service - OpenSSH per-connection server daemon (10.200.16.10:34148). 
Sep 5 23:57:07.611943 sshd[6945]: Accepted publickey for core from 10.200.16.10 port 34148 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:07.613444 sshd[6945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:07.617495 systemd-logind[1701]: New session 18 of user core. Sep 5 23:57:07.624177 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:57:08.403516 sshd[6945]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:08.406893 systemd-logind[1701]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:57:08.407160 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:34148.service: Deactivated successfully. Sep 5 23:57:08.408765 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:57:08.411776 systemd-logind[1701]: Removed session 18. Sep 5 23:57:08.486327 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:34158.service - OpenSSH per-connection server daemon (10.200.16.10:34158). Sep 5 23:57:08.951289 sshd[6964]: Accepted publickey for core from 10.200.16.10 port 34158 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:08.952206 sshd[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:08.956153 systemd-logind[1701]: New session 19 of user core. Sep 5 23:57:08.962016 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 23:57:09.465719 sshd[6964]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:09.469624 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:34158.service: Deactivated successfully. Sep 5 23:57:09.472515 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:57:09.473475 systemd-logind[1701]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:57:09.474418 systemd-logind[1701]: Removed session 19. Sep 5 23:57:09.550101 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:34170.service - OpenSSH per-connection server daemon (10.200.16.10:34170). Sep 5 23:57:09.971515 sshd[6975]: Accepted publickey for core from 10.200.16.10 port 34170 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:09.972362 sshd[6975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:09.976759 systemd-logind[1701]: New session 20 of user core. Sep 5 23:57:09.981054 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:57:10.357006 sshd[6975]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:10.360305 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:34170.service: Deactivated successfully. Sep 5 23:57:10.363605 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:57:10.364786 systemd-logind[1701]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:57:10.365794 systemd-logind[1701]: Removed session 20. Sep 5 23:57:13.619049 systemd[1]: run-containerd-runc-k8s.io-91f5833ff5ae137bb175ac014f33ddb1201b2de807325b7229123ea9b978be90-runc.rM3YqY.mount: Deactivated successfully. Sep 5 23:57:15.444100 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:39768.service - OpenSSH per-connection server daemon (10.200.16.10:39768). Sep 5 23:57:15.868198 sshd[7013]: Accepted publickey for core from 10.200.16.10 port 39768 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:15.869433 sshd[7013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:15.874918 systemd-logind[1701]: New session 21 of user core. 
Sep 5 23:57:15.882087 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 23:57:16.304205 sshd[7013]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:16.309049 systemd-logind[1701]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:57:16.310268 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:39768.service: Deactivated successfully. Sep 5 23:57:16.313740 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:57:16.315572 systemd-logind[1701]: Removed session 21. Sep 5 23:57:18.882483 systemd[1]: run-containerd-runc-k8s.io-fb8f7b0be4ea90c4d7e5be89679cef2788fe25d803d68c536903bb641b25ded9-runc.ne2iR9.mount: Deactivated successfully. Sep 5 23:57:21.400134 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:54598.service - OpenSSH per-connection server daemon (10.200.16.10:54598). Sep 5 23:57:21.851583 sshd[7045]: Accepted publickey for core from 10.200.16.10 port 54598 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:21.853539 sshd[7045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:21.857780 systemd-logind[1701]: New session 22 of user core. Sep 5 23:57:21.862014 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 23:57:22.256731 sshd[7045]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:22.260233 systemd-logind[1701]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:57:22.260628 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:54598.service: Deactivated successfully. Sep 5 23:57:22.264619 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:57:22.267275 systemd-logind[1701]: Removed session 22. Sep 5 23:57:27.344144 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:54604.service - OpenSSH per-connection server daemon (10.200.16.10:54604). Sep 5 23:57:27.794740 sshd[7095]: Accepted publickey for core from 10.200.16.10 port 54604 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:27.796783 sshd[7095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:27.800953 systemd-logind[1701]: New session 23 of user core. Sep 5 23:57:27.809099 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:57:28.184734 sshd[7095]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:28.188798 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:54604.service: Deactivated successfully. Sep 5 23:57:28.190846 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:57:28.191678 systemd-logind[1701]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:57:28.193044 systemd-logind[1701]: Removed session 23. Sep 5 23:57:33.268136 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:51932.service - OpenSSH per-connection server daemon (10.200.16.10:51932). Sep 5 23:57:33.689533 sshd[7110]: Accepted publickey for core from 10.200.16.10 port 51932 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:33.690992 sshd[7110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:33.694987 systemd-logind[1701]: New session 24 of user core. Sep 5 23:57:33.700421 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 23:57:34.059247 sshd[7110]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:34.063653 systemd-logind[1701]: Session 24 logged out. Waiting for processes to exit. 
Sep 5 23:57:34.064145 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:51932.service: Deactivated successfully. Sep 5 23:57:34.066791 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 23:57:34.068195 systemd-logind[1701]: Removed session 24. Sep 5 23:57:39.155263 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:51942.service - OpenSSH per-connection server daemon (10.200.16.10:51942). Sep 5 23:57:39.582269 sshd[7123]: Accepted publickey for core from 10.200.16.10 port 51942 ssh2: RSA SHA256:fK6nM8kjfgqqvWIYceju9SFaQMVJK6ZkdPKlJjcIIKw Sep 5 23:57:39.585008 sshd[7123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:57:39.589784 systemd-logind[1701]: New session 25 of user core. Sep 5 23:57:39.597107 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 23:57:40.000285 sshd[7123]: pam_unix(sshd:session): session closed for user core Sep 5 23:57:40.004323 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:51942.service: Deactivated successfully. Sep 5 23:57:40.007473 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 23:57:40.008707 systemd-logind[1701]: Session 25 logged out. Waiting for processes to exit. Sep 5 23:57:40.010284 systemd-logind[1701]: Removed session 25.
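Sessions 10 through 25 above all trace the same lifecycle: sshd accepts the publickey, pam_unix opens the session, systemd starts session-N.scope, and on disconnect the scope is deactivated and systemd-logind removes the session. When auditing an excerpt like this, pairing the "New session" and "Removed session" records is enough to confirm nothing was left open. A small ad-hoc Go sketch (sample lines abbreviated from the log; the regexes are this note's own, not part of any tool):

    // Illustrative sketch only: pair systemd-logind session open/close
    // records from journal lines like those above.
    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        newSession     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
        removedSession = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        lines := []string{
            "systemd-logind[1701]: New session 10 of user core.",
            "systemd-logind[1701]: Removed session 10.",
            "systemd-logind[1701]: New session 11 of user core.",
        }

        open := map[string]string{} // session id -> user
        for _, l := range lines {
            if m := newSession.FindStringSubmatch(l); m != nil {
                open[m[1]] = m[2]
            } else if m := removedSession.FindStringSubmatch(l); m != nil {
                delete(open, m[1])
            }
        }
        fmt.Println("sessions still open:", open) // map[11:core]
    }

Run over the full journal rather than the sample slice, an empty map at end of input would confirm what reading the excerpt by hand shows: every session from 10 to 25 was closed and removed cleanly.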