Sep 12 23:58:26.274600 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 23:58:26.274622 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 12 23:58:26.274630 kernel: KASLR enabled
Sep 12 23:58:26.274636 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 12 23:58:26.274643 kernel: printk: bootconsole [pl11] enabled
Sep 12 23:58:26.274649 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:58:26.274656 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Sep 12 23:58:26.274662 kernel: random: crng init done
Sep 12 23:58:26.274668 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:58:26.274674 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Sep 12 23:58:26.274680 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274686 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274694 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 12 23:58:26.274700 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274707 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274714 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274720 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274728 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274734 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274741 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 12 23:58:26.274747 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 23:58:26.274753 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 12 23:58:26.274760 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 12 23:58:26.274766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 12 23:58:26.274772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 12 23:58:26.274779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 12 23:58:26.274785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 12 23:58:26.274791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 12 23:58:26.274799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 12 23:58:26.274806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 12 23:58:26.274812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 12 23:58:26.274819 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 12 23:58:26.274825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 12 23:58:26.274832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 12 23:58:26.274838 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Sep 12 23:58:26.274845 kernel: Zone ranges:
Sep 12 23:58:26.274851 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 12 23:58:26.274857 kernel: DMA32 empty
Sep 12 23:58:26.274863 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 23:58:26.274870 kernel: Movable zone start for each node
Sep 12 23:58:26.274880 kernel: Early memory node ranges
Sep 12 23:58:26.274887 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 12 23:58:26.274894 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Sep 12 23:58:26.274900 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Sep 12 23:58:26.274907 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Sep 12 23:58:26.274915 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Sep 12 23:58:26.274922 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Sep 12 23:58:26.274928 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 12 23:58:26.274935 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 12 23:58:26.274942 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 12 23:58:26.274948 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:58:26.274955 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:58:26.274962 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:58:26.274968 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 12 23:58:26.274975 kernel: psci: SMC Calling Convention v1.4
Sep 12 23:58:26.274982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 12 23:58:26.274988 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 12 23:58:26.274997 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:58:26.275003 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:58:26.275010 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 23:58:26.275017 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:58:26.275024 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:58:26.275031 kernel: CPU features: detected: Hardware dirty bit management
Sep 12 23:58:26.275038 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:58:26.275045 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:58:26.275052 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:58:26.275059 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:58:26.275066 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 12 23:58:26.275074 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:58:26.275081 kernel: alternatives: applying boot alternatives
Sep 12 23:58:26.275089 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:58:26.275096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:58:26.275103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:58:26.275110 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:58:26.275116 kernel: Fallback order for Node 0: 0
Sep 12 23:58:26.275123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 12 23:58:26.275130 kernel: Policy zone: Normal
Sep 12 23:58:26.275136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:58:26.275143 kernel: software IO TLB: area num 2.
Sep 12 23:58:26.275151 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Sep 12 23:58:26.275158 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved)
Sep 12 23:58:26.275165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 23:58:26.275172 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:58:26.275179 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:58:26.275186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 23:58:26.275193 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:58:26.275200 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:58:26.275207 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:58:26.275213 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 23:58:26.275220 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:58:26.275228 kernel: GICv3: 960 SPIs implemented
Sep 12 23:58:26.275235 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:58:26.275242 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:58:26.275248 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:58:26.275255 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 12 23:58:26.275262 kernel: ITS: No ITS available, not enabling LPIs
Sep 12 23:58:26.275269 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:58:26.275276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:58:26.277326 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:58:26.277337 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:58:26.277345 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:58:26.277356 kernel: Console: colour dummy device 80x25
Sep 12 23:58:26.277364 kernel: printk: console [tty1] enabled
Sep 12 23:58:26.277371 kernel: ACPI: Core revision 20230628
Sep 12 23:58:26.277378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:58:26.277386 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:58:26.277393 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:58:26.277400 kernel: landlock: Up and running.
Sep 12 23:58:26.277407 kernel: SELinux: Initializing.
Sep 12 23:58:26.277414 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:58:26.277421 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:58:26.277430 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:58:26.277437 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:58:26.277444 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 12 23:58:26.277451 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Sep 12 23:58:26.277458 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 23:58:26.277465 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:58:26.277473 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:58:26.277486 kernel: Remapping and enabling EFI services.
Sep 12 23:58:26.277493 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:58:26.277500 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:58:26.277508 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 12 23:58:26.277516 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:58:26.277524 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:58:26.277531 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 23:58:26.277539 kernel: SMP: Total of 2 processors activated.
Sep 12 23:58:26.277546 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:58:26.277555 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 12 23:58:26.277563 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:58:26.277570 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:58:26.277577 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:58:26.277584 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:58:26.277592 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:58:26.277599 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:58:26.277607 kernel: alternatives: applying system-wide alternatives
Sep 12 23:58:26.277614 kernel: devtmpfs: initialized
Sep 12 23:58:26.277622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:58:26.277630 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 23:58:26.277638 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:58:26.277645 kernel: SMBIOS 3.1.0 present.
Sep 12 23:58:26.277652 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Sep 12 23:58:26.277659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:58:26.277667 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:58:26.277674 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:58:26.277681 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:58:26.277690 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:58:26.277698 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Sep 12 23:58:26.277706 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:58:26.277714 kernel: cpuidle: using governor menu
Sep 12 23:58:26.277721 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:58:26.277728 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:58:26.277735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:58:26.277743 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:58:26.277750 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:58:26.277759 kernel: Modules: 0 pages in range for non-PLT usage
Sep 12 23:58:26.277766 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:58:26.277774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:58:26.277781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:58:26.277789 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:58:26.277796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:58:26.277804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:58:26.277811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:58:26.277818 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:58:26.277827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:58:26.277834 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:58:26.277842 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:58:26.277849 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:58:26.277856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:58:26.277864 kernel: ACPI: Interpreter enabled
Sep 12 23:58:26.277871 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:58:26.277879 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:58:26.277886 kernel: printk: console [ttyAMA0] enabled
Sep 12 23:58:26.277894 kernel: printk: bootconsole [pl11] disabled
Sep 12 23:58:26.277902 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 12 23:58:26.277909 kernel: iommu: Default domain type: Translated
Sep 12 23:58:26.277916 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:58:26.277924 kernel: efivars: Registered efivars operations
Sep 12 23:58:26.277931 kernel: vgaarb: loaded
Sep 12 23:58:26.277938 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:58:26.277945 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:58:26.277953 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:58:26.277961 kernel: pnp: PnP ACPI init
Sep 12 23:58:26.277968 kernel: pnp: PnP ACPI: found 0 devices
Sep 12 23:58:26.277976 kernel: NET: Registered PF_INET protocol family
Sep 12 23:58:26.277983 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:58:26.277990 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:58:26.277998 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:58:26.278006 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:58:26.278013 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:58:26.278020 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:58:26.278029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:58:26.278036 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:58:26.278044 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:58:26.278051 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:58:26.278058 kernel: kvm [1]: HYP mode not available
Sep 12 23:58:26.278065 kernel: Initialise system trusted keyrings
Sep 12 23:58:26.278073 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:58:26.278080 kernel: Key type asymmetric registered
Sep 12 23:58:26.278088 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:58:26.278096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:58:26.278103 kernel: io scheduler mq-deadline registered
Sep 12 23:58:26.278111 kernel: io scheduler kyber registered
Sep 12 23:58:26.278118 kernel: io scheduler bfq registered
Sep 12 23:58:26.278125 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:58:26.278132 kernel: thunder_xcv, ver 1.0
Sep 12 23:58:26.278140 kernel: thunder_bgx, ver 1.0
Sep 12 23:58:26.278147 kernel: nicpf, ver 1.0
Sep 12 23:58:26.278154 kernel: nicvf, ver 1.0
Sep 12 23:58:26.278322 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:58:26.278405 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:58:25 UTC (1757721505)
Sep 12 23:58:26.278416 kernel: efifb: probing for efifb
Sep 12 23:58:26.278423 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 23:58:26.278431 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 23:58:26.278439 kernel: efifb: scrolling: redraw
Sep 12 23:58:26.278446 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 23:58:26.278454 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 23:58:26.278463 kernel: fb0: EFI VGA frame buffer device
Sep 12 23:58:26.278470 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 12 23:58:26.278478 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:58:26.278485 kernel: No ACPI PMU IRQ for CPU0
Sep 12 23:58:26.278492 kernel: No ACPI PMU IRQ for CPU1
Sep 12 23:58:26.278499 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 12 23:58:26.278507 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:58:26.278514 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:58:26.278521 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:58:26.278531 kernel: Segment Routing with IPv6
Sep 12 23:58:26.278538 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:58:26.278545 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:58:26.278552 kernel: Key type dns_resolver registered
Sep 12 23:58:26.278560 kernel: registered taskstats version 1
Sep 12 23:58:26.278567 kernel: Loading compiled-in X.509 certificates
Sep 12 23:58:26.278574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:58:26.278581 kernel: Key type .fscrypt registered
Sep 12 23:58:26.278588 kernel: Key type fscrypt-provisioning registered
Sep 12 23:58:26.278597 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:58:26.278604 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:58:26.278611 kernel: ima: No architecture policies found
Sep 12 23:58:26.278619 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:58:26.278626 kernel: clk: Disabling unused clocks
Sep 12 23:58:26.278633 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:58:26.278641 kernel: Run /init as init process
Sep 12 23:58:26.278648 kernel: with arguments:
Sep 12 23:58:26.278655 kernel: /init
Sep 12 23:58:26.278664 kernel: with environment:
Sep 12 23:58:26.278672 kernel: HOME=/
Sep 12 23:58:26.278679 kernel: TERM=linux
Sep 12 23:58:26.278686 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:58:26.278695 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:58:26.278705 systemd[1]: Detected virtualization microsoft.
Sep 12 23:58:26.278713 systemd[1]: Detected architecture arm64.
Sep 12 23:58:26.278720 systemd[1]: Running in initrd.
Sep 12 23:58:26.278729 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:58:26.278737 systemd[1]: Hostname set to <localhost>.
Sep 12 23:58:26.278745 systemd[1]: Initializing machine ID from random generator.
Sep 12 23:58:26.278753 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:58:26.278761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:58:26.278769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:58:26.278777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:58:26.278785 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:58:26.278795 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:58:26.278803 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:58:26.278812 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:58:26.278821 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:58:26.278828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:58:26.278836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:58:26.278845 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:58:26.278853 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:58:26.278861 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:58:26.278869 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:58:26.278877 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:58:26.278885 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:58:26.278894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:58:26.278902 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:58:26.278910 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:58:26.278920 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:58:26.278928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:58:26.278936 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:58:26.278944 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:58:26.278952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:58:26.278960 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:58:26.278968 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:58:26.278975 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:58:26.278999 systemd-journald[217]: Collecting audit messages is disabled.
Sep 12 23:58:26.279020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:58:26.279030 systemd-journald[217]: Journal started
Sep 12 23:58:26.279049 systemd-journald[217]: Runtime Journal (/run/log/journal/97d095b1165941bfb1353d090bb0b23f) is 8.0M, max 78.5M, 70.5M free.
Sep 12 23:58:26.286426 systemd-modules-load[218]: Inserted module 'overlay'
Sep 12 23:58:26.293575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:26.305301 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:58:26.316298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:58:26.324382 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 12 23:58:26.329611 kernel: Bridge firewalling registered
Sep 12 23:58:26.325254 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:58:26.340790 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:58:26.347521 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:58:26.357661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:58:26.367113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:26.387492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:58:26.396443 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:58:26.415741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:58:26.426446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:58:26.444798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:58:26.464805 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:58:26.482310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:58:26.489459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:58:26.513507 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:58:26.522458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:58:26.543667 dracut-cmdline[249]: dracut-dracut-053
Sep 12 23:58:26.551097 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:58:26.548455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:58:26.604132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:58:26.623333 systemd-resolved[250]: Positive Trust Anchors:
Sep 12 23:58:26.623346 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:58:26.623378 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:58:26.630053 systemd-resolved[250]: Defaulting to hostname 'linux'.
Sep 12 23:58:26.630967 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:58:26.647187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:58:26.711299 kernel: SCSI subsystem initialized
Sep 12 23:58:26.719303 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:58:26.731298 kernel: iscsi: registered transport (tcp)
Sep 12 23:58:26.746341 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:58:26.746385 kernel: QLogic iSCSI HBA Driver
Sep 12 23:58:26.785631 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:58:26.800507 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:58:26.831921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:58:26.831974 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:58:26.838867 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:58:26.887308 kernel: raid6: neonx8 gen() 15748 MB/s
Sep 12 23:58:26.907294 kernel: raid6: neonx4 gen() 15688 MB/s
Sep 12 23:58:26.927291 kernel: raid6: neonx2 gen() 13201 MB/s
Sep 12 23:58:26.948293 kernel: raid6: neonx1 gen() 10526 MB/s
Sep 12 23:58:26.968290 kernel: raid6: int64x8 gen() 6965 MB/s
Sep 12 23:58:26.988290 kernel: raid6: int64x4 gen() 7352 MB/s
Sep 12 23:58:27.009292 kernel: raid6: int64x2 gen() 6130 MB/s
Sep 12 23:58:27.032553 kernel: raid6: int64x1 gen() 5062 MB/s
Sep 12 23:58:27.032564 kernel: raid6: using algorithm neonx8 gen() 15748 MB/s
Sep 12 23:58:27.056777 kernel: raid6: .... xor() 12062 MB/s, rmw enabled
Sep 12 23:58:27.056804 kernel: raid6: using neon recovery algorithm
Sep 12 23:58:27.068900 kernel: xor: measuring software checksum speed
Sep 12 23:58:27.068920 kernel: 8regs : 19797 MB/sec
Sep 12 23:58:27.072491 kernel: 32regs : 19650 MB/sec
Sep 12 23:58:27.076009 kernel: arm64_neon : 27007 MB/sec
Sep 12 23:58:27.080044 kernel: xor: using function: arm64_neon (27007 MB/sec)
Sep 12 23:58:27.130301 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:58:27.141489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:58:27.160445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:58:27.182635 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Sep 12 23:58:27.187774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:58:27.207409 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:58:27.236963 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Sep 12 23:58:27.269358 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:58:27.287497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:58:27.328369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:58:27.346489 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:58:27.380144 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:58:27.393947 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:58:27.408615 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:58:27.421673 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:58:27.446813 kernel: hv_vmbus: Vmbus version:5.3
Sep 12 23:58:27.441490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:58:27.468303 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 23:58:27.474299 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 23:58:27.474348 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 23:58:27.484124 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 23:58:27.484987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:58:27.532979 kernel: scsi host0: storvsc_host_t
Sep 12 23:58:27.533155 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 12 23:58:27.533187 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 23:58:27.533197 kernel: scsi host1: storvsc_host_t
Sep 12 23:58:27.533311 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 12 23:58:27.533323 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 23:58:27.533332 kernel: PTP clock support registered
Sep 12 23:58:27.497133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:58:27.581235 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 12 23:58:27.581268 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 12 23:58:27.581316 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 23:58:27.497256 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:58:27.540541 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:58:27.559202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:58:27.559345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:27.573140 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:27.651885 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 23:58:27.651910 kernel: hv_vmbus: registering driver hv_utils
Sep 12 23:58:27.651920 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 23:58:27.651930 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 23:58:27.651941 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 23:58:27.609725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:58:28.079711 systemd-resolved[250]: Clock change detected. Flushing caches.
Sep 12 23:58:28.082664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:58:28.112373 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 23:58:28.112593 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 23:58:28.114279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:58:28.137133 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 23:58:28.137286 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: VF slot 1 added
Sep 12 23:58:28.155133 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 12 23:58:28.155358 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 12 23:58:28.155410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:58:28.189233 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 12 23:58:28.189415 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 12 23:58:28.189505 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 12 23:58:28.189590 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:28.189601 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 12 23:58:28.198342 kernel: hv_vmbus: registering driver hv_pci
Sep 12 23:58:28.208147 kernel: hv_pci 4b938de2-ff2b-4fd5-b82e-85296756e3a0: PCI VMBus probing: Using version 0x10004
Sep 12 23:58:28.363603 kernel: hv_pci 4b938de2-ff2b-4fd5-b82e-85296756e3a0: PCI host bridge to bus ff2b:00
Sep 12 23:58:28.363802 kernel: pci_bus ff2b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 12 23:58:28.363910 kernel: pci_bus ff2b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 23:58:28.376610 kernel: pci ff2b:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 12 23:58:28.384172 kernel: pci ff2b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 23:58:28.389559 kernel: pci ff2b:00:02.0: enabling Extended Tags
Sep 12 23:58:28.407125 kernel: pci ff2b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ff2b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 12 23:58:28.419021 kernel: pci_bus ff2b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 23:58:28.419234 kernel: pci ff2b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 12 23:58:28.458505 kernel: mlx5_core ff2b:00:02.0: enabling device (0000 -> 0002)
Sep 12 23:58:28.467123 kernel: mlx5_core ff2b:00:02.0: firmware version: 16.31.2424
Sep 12 23:58:28.744483 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: VF registering: eth1
Sep 12 23:58:28.744681 kernel: mlx5_core ff2b:00:02.0 eth1: joined to eth0
Sep 12 23:58:28.754136 kernel: mlx5_core ff2b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 12 23:58:28.767134 kernel: mlx5_core ff2b:00:02.0 enP65323s1: renamed from eth1
Sep 12 23:58:29.006171 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481)
Sep 12 23:58:29.020558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 12 23:58:29.043134 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (492)
Sep 12 23:58:29.052286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 12 23:58:29.064251 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 12 23:58:29.075738 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 12 23:58:29.103310 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:58:29.129249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 12 23:58:30.144153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 12 23:58:30.144715 disk-uuid[600]: The operation has completed successfully.
Sep 12 23:58:30.220545 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:58:30.220640 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:58:30.242337 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:58:30.254170 sh[716]: Success
Sep 12 23:58:30.293137 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 23:58:30.716679 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:58:30.740232 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:58:30.749909 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:58:30.789974 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 12 23:58:30.790032 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:30.796611 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:58:30.801598 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:58:30.806142 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:58:31.244015 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:58:31.249514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:58:31.266331 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:58:31.274254 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:58:31.309919 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:31.309953 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:31.315083 kernel: BTRFS info (device sda6): using free space tree
Sep 12 23:58:31.386153 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 23:58:31.398894 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:58:31.412947 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:31.399343 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:58:31.425243 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:58:31.436780 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:58:31.449269 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:58:31.458409 systemd-networkd[897]: lo: Link UP
Sep 12 23:58:31.458413 systemd-networkd[897]: lo: Gained carrier
Sep 12 23:58:31.459945 systemd-networkd[897]: Enumeration completed
Sep 12 23:58:31.466056 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:58:31.466059 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:58:31.469211 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:58:31.483554 systemd[1]: Reached target network.target - Network.
Sep 12 23:58:31.566121 kernel: mlx5_core ff2b:00:02.0 enP65323s1: Link up
Sep 12 23:58:31.566383 kernel: buffer_size[0]=0 is not enough for lossless buffer
Sep 12 23:58:31.654130 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: Data path switched to VF: enP65323s1
Sep 12 23:58:31.654657 systemd-networkd[897]: enP65323s1: Link UP
Sep 12 23:58:31.654894 systemd-networkd[897]: eth0: Link UP
Sep 12 23:58:31.655327 systemd-networkd[897]: eth0: Gained carrier
Sep 12 23:58:31.655337 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:58:31.677628 systemd-networkd[897]: enP65323s1: Gained carrier
Sep 12 23:58:31.693141 systemd-networkd[897]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 12 23:58:32.807287 ignition[900]: Ignition 2.19.0
Sep 12 23:58:32.807297 ignition[900]: Stage: fetch-offline
Sep 12 23:58:32.807332 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:32.812051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:58:32.807339 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:32.827405 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 23:58:32.807430 ignition[900]: parsed url from cmdline: ""
Sep 12 23:58:32.807433 ignition[900]: no config URL provided
Sep 12 23:58:32.807438 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:58:32.807445 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:58:32.807449 ignition[900]: failed to fetch config: resource requires networking
Sep 12 23:58:32.807610 ignition[900]: Ignition finished successfully
Sep 12 23:58:32.847502 ignition[909]: Ignition 2.19.0
Sep 12 23:58:32.847508 ignition[909]: Stage: fetch
Sep 12 23:58:32.847711 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:32.847720 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:32.847813 ignition[909]: parsed url from cmdline: ""
Sep 12 23:58:32.847816 ignition[909]: no config URL provided
Sep 12 23:58:32.847821 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:58:32.847831 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:58:32.847852 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 23:58:32.939269 ignition[909]: GET result: OK
Sep 12 23:58:32.939330 ignition[909]: config has been read from IMDS userdata
Sep 12 23:58:32.939375 ignition[909]: parsing config with SHA512: 7eb132286ab4753bb0f24c02cfe94269ad17aeb33525e3236fe6cfe59ba2885e1fdee48fbe0e1bcaf911ab3bf9bf7f74df575b10379514ce0ede84e0e2994b32
Sep 12 23:58:32.943040 unknown[909]: fetched base config from "system"
Sep 12 23:58:32.943440 ignition[909]: fetch: fetch complete
Sep 12 23:58:32.943046 unknown[909]: fetched base config from "system"
Sep 12 23:58:32.943445 ignition[909]: fetch: fetch passed
Sep 12 23:58:32.943052 unknown[909]: fetched user config from "azure"
Sep 12 23:58:32.943485 ignition[909]: Ignition finished successfully
Sep 12 23:58:32.948439 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 23:58:32.976383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
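
The fetch stage above records Ignition pulling its config from Azure's instance metadata service (IMDS) and logging a SHA-512 digest of it before parsing. The following is a minimal Python sketch of that request pattern, not Ignition's actual Go implementation: the endpoint URL is taken verbatim from the log entries above, while the Metadata header requirement and the base64-encoded userData payload are standard, documented Azure IMDS behavior.

    import base64
    import hashlib
    import urllib.request

    # userData endpoint exactly as it appears in the ignition[909] log entries above.
    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/userData"
                "?api-version=2021-01-01&format=text")

    # Azure IMDS requires the Metadata header and serves userData base64-encoded.
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        config = base64.b64decode(resp.read())

    # Roughly what the 'parsing config with SHA512: ...' log line corresponds to:
    # a SHA-512 digest of the fetched config, printed before parsing.
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())
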
Sep 12 23:58:32.999758 ignition[915]: Ignition 2.19.0
Sep 12 23:58:33.003084 ignition[915]: Stage: kargs
Sep 12 23:58:33.003335 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:33.007736 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:58:33.003347 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:33.004280 ignition[915]: kargs: kargs passed
Sep 12 23:58:33.004332 ignition[915]: Ignition finished successfully
Sep 12 23:58:33.030240 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:58:33.035748 systemd-networkd[897]: eth0: Gained IPv6LL
Sep 12 23:58:33.049375 ignition[921]: Ignition 2.19.0
Sep 12 23:58:33.049381 ignition[921]: Stage: disks
Sep 12 23:58:33.052401 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:58:33.049622 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:33.061234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:58:33.049634 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:33.069839 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:58:33.050888 ignition[921]: disks: disks passed
Sep 12 23:58:33.081599 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:58:33.050942 ignition[921]: Ignition finished successfully
Sep 12 23:58:33.091649 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:58:33.102780 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:58:33.128364 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:58:33.197658 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 12 23:58:33.204607 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:58:33.223232 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:58:33.284134 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 12 23:58:33.285052 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:58:33.289927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:58:33.335192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:58:33.359284 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Sep 12 23:58:33.372095 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:33.372132 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:33.376450 kernel: BTRFS info (device sda6): using free space tree
Sep 12 23:58:33.385136 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 23:58:33.385258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:58:33.396223 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 23:58:33.409797 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:58:33.409834 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:58:33.434336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:58:33.439211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:58:33.459378 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:58:34.079918 coreos-metadata[957]: Sep 12 23:58:34.079 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 12 23:58:34.089459 coreos-metadata[957]: Sep 12 23:58:34.089 INFO Fetch successful
Sep 12 23:58:34.095533 coreos-metadata[957]: Sep 12 23:58:34.095 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 12 23:58:34.106604 coreos-metadata[957]: Sep 12 23:58:34.106 INFO Fetch successful
Sep 12 23:58:34.120152 coreos-metadata[957]: Sep 12 23:58:34.120 INFO wrote hostname ci-4081.3.5-n-4f403f96f8 to /sysroot/etc/hostname
Sep 12 23:58:34.129204 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 23:58:34.430094 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:58:34.485277 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:58:34.508410 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:58:34.517373 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:58:35.608054 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:58:35.623263 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:58:35.635089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:58:35.654926 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:35.649552 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:58:35.680982 ignition[1057]: INFO : Ignition 2.19.0
Sep 12 23:58:35.680982 ignition[1057]: INFO : Stage: mount
Sep 12 23:58:35.680982 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:35.680982 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:35.684929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:58:35.716905 ignition[1057]: INFO : mount: mount passed
Sep 12 23:58:35.716905 ignition[1057]: INFO : Ignition finished successfully
Sep 12 23:58:35.692350 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:58:35.721337 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:58:35.743248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:58:35.777126 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069)
Sep 12 23:58:35.793120 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:58:35.793160 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:58:35.793171 kernel: BTRFS info (device sda6): using free space tree
Sep 12 23:58:35.802120 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 12 23:58:35.811268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
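
The coreos-metadata entries above show the Flatcar Metadata Hostname Agent fetching the instance name from IMDS and writing it to /sysroot/etc/hostname. A rough Python equivalent of that hostname step, assuming the same Metadata-header convention as the userData fetch; the endpoint URL and target path are taken from the log lines above, and this is an illustration only, not the agent's actual implementation:

    import urllib.request

    # Instance-name endpoint copied from the coreos-metadata log line above.
    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    # Mirrors the logged step: 'wrote hostname ... to /sysroot/etc/hostname'.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
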
Sep 12 23:58:35.838144 ignition[1087]: INFO : Ignition 2.19.0
Sep 12 23:58:35.838144 ignition[1087]: INFO : Stage: files
Sep 12 23:58:35.846191 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:58:35.846191 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 23:58:35.846191 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:58:35.870748 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:58:35.870748 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:58:35.956517 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:58:35.963547 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:58:35.963547 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:58:35.956985 unknown[1087]: wrote ssh authorized keys file for user: core
Sep 12 23:58:36.016505 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 23:58:36.026889 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 23:58:36.063728 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:58:36.167237 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 23:58:36.167237 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 23:58:36.669238 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 23:58:36.893895 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:58:36.893895 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 23:58:36.943590 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:58:36.955571 ignition[1087]: INFO : files: files passed
Sep 12 23:58:36.955571 ignition[1087]: INFO : Ignition finished successfully
Sep 12 23:58:36.955348 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:58:36.979722 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:58:37.001266 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:58:37.026328 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:58:37.067404 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:58:37.067404 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:58:37.026441 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:58:37.101611 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:58:37.067875 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:58:37.082311 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:58:37.122358 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:58:37.151604 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:58:37.151747 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:58:37.163596 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:58:37.175586 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:58:37.186852 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:58:37.204386 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:58:37.225081 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:58:37.238370 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:58:37.255646 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:37.267656 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:58:37.273973 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:58:37.284674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:58:37.284800 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:58:37.300197 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:58:37.305814 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:58:37.317069 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:58:37.328271 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:58:37.338796 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:58:37.350381 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:58:37.361547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:58:37.373864 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:58:37.384310 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:58:37.395706 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:58:37.405007 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:58:37.405139 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:58:37.419213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:58:37.425045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:58:37.436380 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:58:37.436449 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:58:37.448035 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:58:37.448166 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:58:37.464940 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:58:37.465059 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:58:37.471778 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:58:37.471866 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:58:37.482364 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 23:58:37.482458 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 12 23:58:37.560990 ignition[1139]: INFO : Ignition 2.19.0 Sep 12 23:58:37.560990 ignition[1139]: INFO : Stage: umount Sep 12 23:58:37.560990 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:37.560990 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:37.512387 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:58:37.605158 ignition[1139]: INFO : umount: umount passed Sep 12 23:58:37.605158 ignition[1139]: INFO : Ignition finished successfully Sep 12 23:58:37.528325 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:58:37.528489 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:58:37.561319 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:58:37.572182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:58:37.572333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:58:37.580882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:58:37.580989 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:58:37.610682 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:58:37.610966 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:58:37.620220 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:58:37.620462 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:58:37.630572 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:58:37.630622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:58:37.647797 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 23:58:37.647861 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 23:58:37.658465 systemd[1]: Stopped target network.target - Network. Sep 12 23:58:37.663399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:58:37.663476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:58:37.680951 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:58:37.692625 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:58:37.698131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:58:37.714432 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:58:37.723989 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:58:37.733834 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:58:37.733908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:58:37.743920 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:58:37.743977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:58:37.754617 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:58:37.754676 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:58:37.764680 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:58:37.764724 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:58:37.774816 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:58:37.785271 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 12 23:58:37.797157 systemd-networkd[897]: eth0: DHCPv6 lease lost Sep 12 23:58:37.799054 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:58:37.799728 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:58:37.799807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:58:37.816655 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:58:37.816760 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:58:37.824537 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:58:37.825160 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:58:37.839988 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:58:37.840055 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:58:37.868296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:58:37.877900 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:58:37.877971 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:58:37.889798 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:58:37.889851 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:37.900916 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:58:38.076504 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: Data path switched from VF: enP65323s1 Sep 12 23:58:37.900961 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:58:37.912466 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:58:37.912515 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:58:37.924926 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:58:37.961724 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:58:37.961913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:58:37.972863 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:58:37.972914 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:58:37.983479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:58:37.983510 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:58:38.005339 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:58:38.005399 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:58:38.021255 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:58:38.021311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:58:38.037877 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:58:38.037925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:38.077562 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:58:38.093614 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:58:38.093692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:58:38.105803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 23:58:38.105855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:38.118867 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:58:38.119033 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:58:38.130673 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:58:38.130763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:58:38.142646 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:58:38.142759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:58:38.160995 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:58:38.161159 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:58:38.171881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:58:38.327668 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 12 23:58:38.199360 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:58:38.218266 systemd[1]: Switching root. Sep 12 23:58:38.340492 systemd-journald[217]: Journal stopped Sep 12 23:58:26.274600 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 12 23:58:26.274622 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 12 23:58:26.274630 kernel: KASLR enabled Sep 12 23:58:26.274636 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Sep 12 23:58:26.274643 kernel: printk: bootconsole [pl11] enabled Sep 12 23:58:26.274649 kernel: efi: EFI v2.7 by EDK II Sep 12 23:58:26.274656 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Sep 12 23:58:26.274662 kernel: random: crng init done Sep 12 23:58:26.274668 kernel: ACPI: Early table checksum verification disabled Sep 12 23:58:26.274674 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Sep 12 23:58:26.274680 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274686 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274694 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 23:58:26.274700 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274707 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274714 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274720 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274728 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274734 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274741 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Sep 12 23:58:26.274747 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 23:58:26.274753 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Sep 12 23:58:26.274760 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x3fffffff] Sep 12 23:58:26.274766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Sep 12 23:58:26.274772 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Sep 12 23:58:26.274779 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Sep 12 23:58:26.274785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Sep 12 23:58:26.274791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Sep 12 23:58:26.274799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Sep 12 23:58:26.274806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Sep 12 23:58:26.274812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Sep 12 23:58:26.274819 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Sep 12 23:58:26.274825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Sep 12 23:58:26.274832 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Sep 12 23:58:26.274838 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Sep 12 23:58:26.274845 kernel: Zone ranges: Sep 12 23:58:26.274851 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Sep 12 23:58:26.274857 kernel: DMA32 empty Sep 12 23:58:26.274863 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 23:58:26.274870 kernel: Movable zone start for each node Sep 12 23:58:26.274880 kernel: Early memory node ranges Sep 12 23:58:26.274887 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Sep 12 23:58:26.274894 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Sep 12 23:58:26.274900 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Sep 12 23:58:26.274907 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Sep 12 23:58:26.274915 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Sep 12 23:58:26.274922 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Sep 12 23:58:26.274928 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Sep 12 23:58:26.274935 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Sep 12 23:58:26.274942 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Sep 12 23:58:26.274948 kernel: psci: probing for conduit method from ACPI. Sep 12 23:58:26.274955 kernel: psci: PSCIv1.1 detected in firmware. Sep 12 23:58:26.274962 kernel: psci: Using standard PSCI v0.2 function IDs Sep 12 23:58:26.274968 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Sep 12 23:58:26.274975 kernel: psci: SMC Calling Convention v1.4 Sep 12 23:58:26.274982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Sep 12 23:58:26.274988 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Sep 12 23:58:26.274997 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 12 23:58:26.275003 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 12 23:58:26.275010 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 12 23:58:26.275017 kernel: Detected PIPT I-cache on CPU0 Sep 12 23:58:26.275024 kernel: CPU features: detected: GIC system register CPU interface Sep 12 23:58:26.275031 kernel: CPU features: detected: Hardware dirty bit management Sep 12 23:58:26.275038 kernel: CPU features: detected: Spectre-BHB Sep 12 23:58:26.275045 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 12 23:58:26.275052 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 12 23:58:26.275059 kernel: CPU features: detected: ARM erratum 1418040 Sep 12 23:58:26.275066 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Sep 12 23:58:26.275074 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 12 23:58:26.275081 kernel: alternatives: applying boot alternatives Sep 12 23:58:26.275089 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:58:26.275096 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 23:58:26.275103 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 23:58:26.275110 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 23:58:26.275116 kernel: Fallback order for Node 0: 0 Sep 12 23:58:26.275123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 12 23:58:26.275130 kernel: Policy zone: Normal Sep 12 23:58:26.275136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 23:58:26.275143 kernel: software IO TLB: area num 2. Sep 12 23:58:26.275151 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Sep 12 23:58:26.275158 kernel: Memory: 3982564K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 211596K reserved, 0K cma-reserved) Sep 12 23:58:26.275165 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 23:58:26.275172 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 23:58:26.275179 kernel: rcu: RCU event tracing is enabled. Sep 12 23:58:26.275186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 23:58:26.275193 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 23:58:26.275200 kernel: Tracing variant of Tasks RCU enabled. Sep 12 23:58:26.275207 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
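The "Kernel command line:" entry above carries the switches that steer the rest of this boot: flatcar.oem.id=azure selects the platform Ignition later fetches its config from, and verity.usrhash is consumed when /usr is verified further down. A minimal sketch of pulling these out of /proc/cmdline; plain whitespace splitting happens to be safe for this particular command line, and valueless flags such as flatcar.autologin are simply skipped.

# Minimal sketch: extract the flatcar.*/verity.* switches shown in the
# "Kernel command line:" entry above from /proc/cmdline.
cmdline = open("/proc/cmdline").read().split()
params = dict(tok.split("=", 1) for tok in cmdline if "=" in tok)

print(params["flatcar.oem.id"])   # "azure" on this boot
print(params["verity.usrhash"])   # root hash used by verity-setup later
print(params["root"])             # "LABEL=ROOT"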
Sep 12 23:58:26.275213 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 23:58:26.275220 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 12 23:58:26.275228 kernel: GICv3: 960 SPIs implemented Sep 12 23:58:26.275235 kernel: GICv3: 0 Extended SPIs implemented Sep 12 23:58:26.275242 kernel: Root IRQ handler: gic_handle_irq Sep 12 23:58:26.275248 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 12 23:58:26.275255 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 12 23:58:26.275262 kernel: ITS: No ITS available, not enabling LPIs Sep 12 23:58:26.275269 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 23:58:26.275276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 23:58:26.277326 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 12 23:58:26.277337 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 12 23:58:26.277345 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 12 23:58:26.277356 kernel: Console: colour dummy device 80x25 Sep 12 23:58:26.277364 kernel: printk: console [tty1] enabled Sep 12 23:58:26.277371 kernel: ACPI: Core revision 20230628 Sep 12 23:58:26.277378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 12 23:58:26.277386 kernel: pid_max: default: 32768 minimum: 301 Sep 12 23:58:26.277393 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 23:58:26.277400 kernel: landlock: Up and running. Sep 12 23:58:26.277407 kernel: SELinux: Initializing. Sep 12 23:58:26.277414 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:58:26.277421 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 12 23:58:26.277430 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:58:26.277437 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 23:58:26.277444 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 12 23:58:26.277451 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Sep 12 23:58:26.277458 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 23:58:26.277465 kernel: rcu: Hierarchical SRCU implementation. Sep 12 23:58:26.277473 kernel: rcu: Max phase no-delay instances is 400. Sep 12 23:58:26.277486 kernel: Remapping and enabling EFI services. Sep 12 23:58:26.277493 kernel: smp: Bringing up secondary CPUs ... Sep 12 23:58:26.277500 kernel: Detected PIPT I-cache on CPU1 Sep 12 23:58:26.277508 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 12 23:58:26.277516 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 12 23:58:26.277524 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 12 23:58:26.277531 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 23:58:26.277539 kernel: SMP: Total of 2 processors activated. 
Sep 12 23:58:26.277546 kernel: CPU features: detected: 32-bit EL0 Support Sep 12 23:58:26.277555 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 12 23:58:26.277563 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 12 23:58:26.277570 kernel: CPU features: detected: CRC32 instructions Sep 12 23:58:26.277577 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 12 23:58:26.277584 kernel: CPU features: detected: LSE atomic instructions Sep 12 23:58:26.277592 kernel: CPU features: detected: Privileged Access Never Sep 12 23:58:26.277599 kernel: CPU: All CPU(s) started at EL1 Sep 12 23:58:26.277607 kernel: alternatives: applying system-wide alternatives Sep 12 23:58:26.277614 kernel: devtmpfs: initialized Sep 12 23:58:26.277622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 23:58:26.277630 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 23:58:26.277638 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 23:58:26.277645 kernel: SMBIOS 3.1.0 present. Sep 12 23:58:26.277652 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Sep 12 23:58:26.277659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 23:58:26.277667 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 12 23:58:26.277674 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 12 23:58:26.277681 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 12 23:58:26.277690 kernel: audit: initializing netlink subsys (disabled) Sep 12 23:58:26.277698 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Sep 12 23:58:26.277706 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 23:58:26.277714 kernel: cpuidle: using governor menu Sep 12 23:58:26.277721 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 12 23:58:26.277728 kernel: ASID allocator initialised with 32768 entries Sep 12 23:58:26.277735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 23:58:26.277743 kernel: Serial: AMBA PL011 UART driver Sep 12 23:58:26.277750 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 12 23:58:26.277759 kernel: Modules: 0 pages in range for non-PLT usage Sep 12 23:58:26.277766 kernel: Modules: 508992 pages in range for PLT usage Sep 12 23:58:26.277774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 23:58:26.277781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 23:58:26.277789 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 12 23:58:26.277796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 12 23:58:26.277804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 23:58:26.277811 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 23:58:26.277818 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 12 23:58:26.277827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 12 23:58:26.277834 kernel: ACPI: Added _OSI(Module Device) Sep 12 23:58:26.277842 kernel: ACPI: Added _OSI(Processor Device) Sep 12 23:58:26.277849 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 23:58:26.277856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 23:58:26.277864 kernel: ACPI: Interpreter enabled Sep 12 23:58:26.277871 kernel: ACPI: Using GIC for interrupt routing Sep 12 23:58:26.277879 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 12 23:58:26.277886 kernel: printk: console [ttyAMA0] enabled Sep 12 23:58:26.277894 kernel: printk: bootconsole [pl11] disabled Sep 12 23:58:26.277902 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 12 23:58:26.277909 kernel: iommu: Default domain type: Translated Sep 12 23:58:26.277916 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 12 23:58:26.277924 kernel: efivars: Registered efivars operations Sep 12 23:58:26.277931 kernel: vgaarb: loaded Sep 12 23:58:26.277938 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 12 23:58:26.277945 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 23:58:26.277953 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 23:58:26.277961 kernel: pnp: PnP ACPI init Sep 12 23:58:26.277968 kernel: pnp: PnP ACPI: found 0 devices Sep 12 23:58:26.277976 kernel: NET: Registered PF_INET protocol family Sep 12 23:58:26.277983 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 23:58:26.277990 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 12 23:58:26.277998 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 23:58:26.278006 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 23:58:26.278013 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 12 23:58:26.278020 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 12 23:58:26.278029 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:58:26.278036 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 12 23:58:26.278044 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 
23:58:26.278051 kernel: PCI: CLS 0 bytes, default 64 Sep 12 23:58:26.278058 kernel: kvm [1]: HYP mode not available Sep 12 23:58:26.278065 kernel: Initialise system trusted keyrings Sep 12 23:58:26.278073 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 12 23:58:26.278080 kernel: Key type asymmetric registered Sep 12 23:58:26.278088 kernel: Asymmetric key parser 'x509' registered Sep 12 23:58:26.278096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 23:58:26.278103 kernel: io scheduler mq-deadline registered Sep 12 23:58:26.278111 kernel: io scheduler kyber registered Sep 12 23:58:26.278118 kernel: io scheduler bfq registered Sep 12 23:58:26.278125 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 23:58:26.278132 kernel: thunder_xcv, ver 1.0 Sep 12 23:58:26.278140 kernel: thunder_bgx, ver 1.0 Sep 12 23:58:26.278147 kernel: nicpf, ver 1.0 Sep 12 23:58:26.278154 kernel: nicvf, ver 1.0 Sep 12 23:58:26.278322 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 12 23:58:26.278405 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:58:25 UTC (1757721505) Sep 12 23:58:26.278416 kernel: efifb: probing for efifb Sep 12 23:58:26.278423 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 23:58:26.278431 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 23:58:26.278439 kernel: efifb: scrolling: redraw Sep 12 23:58:26.278446 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 23:58:26.278454 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 23:58:26.278463 kernel: fb0: EFI VGA frame buffer device Sep 12 23:58:26.278470 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 12 23:58:26.278478 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 23:58:26.278485 kernel: No ACPI PMU IRQ for CPU0 Sep 12 23:58:26.278492 kernel: No ACPI PMU IRQ for CPU1 Sep 12 23:58:26.278499 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 12 23:58:26.278507 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 12 23:58:26.278514 kernel: watchdog: Hard watchdog permanently disabled Sep 12 23:58:26.278521 kernel: NET: Registered PF_INET6 protocol family Sep 12 23:58:26.278531 kernel: Segment Routing with IPv6 Sep 12 23:58:26.278538 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 23:58:26.278545 kernel: NET: Registered PF_PACKET protocol family Sep 12 23:58:26.278552 kernel: Key type dns_resolver registered Sep 12 23:58:26.278560 kernel: registered taskstats version 1 Sep 12 23:58:26.278567 kernel: Loading compiled-in X.509 certificates Sep 12 23:58:26.278574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 12 23:58:26.278581 kernel: Key type .fscrypt registered Sep 12 23:58:26.278588 kernel: Key type fscrypt-provisioning registered Sep 12 23:58:26.278597 kernel: ima: No TPM chip found, activating TPM-bypass! 
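The rtc-efi entry above states the same instant twice, as a wall-clock time and as an epoch value; the pair can be cross-checked directly:

# Verify the rtc-efi line's claim that epoch 1757721505 is 2025-09-12T23:58:25 UTC.
from datetime import datetime, timezone
print(datetime.fromtimestamp(1757721505, tz=timezone.utc).isoformat())
# -> 2025-09-12T23:58:25+00:00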
Sep 12 23:58:26.278604 kernel: ima: Allocated hash algorithm: sha1 Sep 12 23:58:26.278611 kernel: ima: No architecture policies found Sep 12 23:58:26.278619 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 12 23:58:26.278626 kernel: clk: Disabling unused clocks Sep 12 23:58:26.278633 kernel: Freeing unused kernel memory: 39488K Sep 12 23:58:26.278641 kernel: Run /init as init process Sep 12 23:58:26.278648 kernel: with arguments: Sep 12 23:58:26.278655 kernel: /init Sep 12 23:58:26.278664 kernel: with environment: Sep 12 23:58:26.278672 kernel: HOME=/ Sep 12 23:58:26.278679 kernel: TERM=linux Sep 12 23:58:26.278686 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 23:58:26.278695 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:58:26.278705 systemd[1]: Detected virtualization microsoft. Sep 12 23:58:26.278713 systemd[1]: Detected architecture arm64. Sep 12 23:58:26.278720 systemd[1]: Running in initrd. Sep 12 23:58:26.278729 systemd[1]: No hostname configured, using default hostname. Sep 12 23:58:26.278737 systemd[1]: Hostname set to <localhost>. Sep 12 23:58:26.278745 systemd[1]: Initializing machine ID from random generator. Sep 12 23:58:26.278753 systemd[1]: Queued start job for default target initrd.target. Sep 12 23:58:26.278761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:58:26.278769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:58:26.278777 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 23:58:26.278785 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:58:26.278795 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 23:58:26.278803 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 23:58:26.278812 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 23:58:26.278821 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 23:58:26.278828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:58:26.278836 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:58:26.278845 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:58:26.278853 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:58:26.278861 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:58:26.278869 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:58:26.278877 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:58:26.278885 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:58:26.278894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 23:58:26.278902 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
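The \x2d sequences in the "Expecting device" unit names above are systemd's unit-name escaping, which maps a device path to a .device unit by turning "/" into "-" and hex-escaping other special bytes (the same transformation systemd-escape --path performs). A simplified sketch, sufficient for the names seen here; the real rules additionally leave ":" and non-leading "." unescaped.

# Simplified version of systemd's path escaping, matching the
# dev-disk-by\x2dlabel-... unit names above.
def systemd_escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                  # path separators become dashes
        elif ch.isalnum() or ch == "_":
            out.append(ch)                   # safe characters pass through
        else:
            out.append("\\x%02x" % ord(ch))  # everything else is hex-escaped
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device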
Sep 12 23:58:26.278910 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:58:26.278920 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:58:26.278928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:58:26.278936 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:58:26.278944 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 23:58:26.278952 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:58:26.278960 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 23:58:26.278968 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 23:58:26.278975 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:58:26.278999 systemd-journald[217]: Collecting audit messages is disabled. Sep 12 23:58:26.279020 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:58:26.279030 systemd-journald[217]: Journal started Sep 12 23:58:26.279049 systemd-journald[217]: Runtime Journal (/run/log/journal/97d095b1165941bfb1353d090bb0b23f) is 8.0M, max 78.5M, 70.5M free. Sep 12 23:58:26.286426 systemd-modules-load[218]: Inserted module 'overlay' Sep 12 23:58:26.293575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:26.305301 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:58:26.316298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 23:58:26.324382 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 12 23:58:26.329611 kernel: Bridge firewalling registered Sep 12 23:58:26.325254 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 23:58:26.340790 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:58:26.347521 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 23:58:26.357661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:58:26.367113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:26.387492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:26.396443 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:58:26.415741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:58:26.426446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:58:26.444798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:26.464805 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:26.482310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:58:26.489459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:58:26.513507 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 23:58:26.522458 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 23:58:26.543667 dracut-cmdline[249]: dracut-dracut-053 Sep 12 23:58:26.551097 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 12 23:58:26.548455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:58:26.604132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:58:26.623333 systemd-resolved[250]: Positive Trust Anchors: Sep 12 23:58:26.623346 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:58:26.623378 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:58:26.630053 systemd-resolved[250]: Defaulting to hostname 'linux'. Sep 12 23:58:26.630967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:58:26.647187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:26.711299 kernel: SCSI subsystem initialized Sep 12 23:58:26.719303 kernel: Loading iSCSI transport class v2.0-870. Sep 12 23:58:26.731298 kernel: iscsi: registered transport (tcp) Sep 12 23:58:26.746341 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:58:26.746385 kernel: QLogic iSCSI HBA Driver Sep 12 23:58:26.785631 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:58:26.800507 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:58:26.831921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 23:58:26.831974 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:58:26.838867 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 23:58:26.887308 kernel: raid6: neonx8 gen() 15748 MB/s Sep 12 23:58:26.907294 kernel: raid6: neonx4 gen() 15688 MB/s Sep 12 23:58:26.927291 kernel: raid6: neonx2 gen() 13201 MB/s Sep 12 23:58:26.948293 kernel: raid6: neonx1 gen() 10526 MB/s Sep 12 23:58:26.968290 kernel: raid6: int64x8 gen() 6965 MB/s Sep 12 23:58:26.988290 kernel: raid6: int64x4 gen() 7352 MB/s Sep 12 23:58:27.009292 kernel: raid6: int64x2 gen() 6130 MB/s Sep 12 23:58:27.032553 kernel: raid6: int64x1 gen() 5062 MB/s Sep 12 23:58:27.032564 kernel: raid6: using algorithm neonx8 gen() 15748 MB/s Sep 12 23:58:27.056777 kernel: raid6: .... 
xor() 12062 MB/s, rmw enabled Sep 12 23:58:27.056804 kernel: raid6: using neon recovery algorithm Sep 12 23:58:27.068900 kernel: xor: measuring software checksum speed Sep 12 23:58:27.068920 kernel: 8regs : 19797 MB/sec Sep 12 23:58:27.072491 kernel: 32regs : 19650 MB/sec Sep 12 23:58:27.076009 kernel: arm64_neon : 27007 MB/sec Sep 12 23:58:27.080044 kernel: xor: using function: arm64_neon (27007 MB/sec) Sep 12 23:58:27.130301 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:58:27.141489 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:58:27.160445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:58:27.182635 systemd-udevd[435]: Using default interface naming scheme 'v255'. Sep 12 23:58:27.187774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:58:27.207409 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:58:27.236963 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Sep 12 23:58:27.269358 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:58:27.287497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:58:27.328369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:58:27.346489 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 23:58:27.380144 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:58:27.393947 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:58:27.408615 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:58:27.421673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:58:27.446813 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 23:58:27.441490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:58:27.468303 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 23:58:27.474299 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 23:58:27.474348 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 23:58:27.484124 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 23:58:27.484987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:58:27.532979 kernel: scsi host0: storvsc_host_t Sep 12 23:58:27.533155 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 12 23:58:27.533187 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 23:58:27.533197 kernel: scsi host1: storvsc_host_t Sep 12 23:58:27.533311 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 12 23:58:27.533323 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 23:58:27.533332 kernel: PTP clock support registered Sep 12 23:58:27.497133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 12 23:58:27.581235 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 12 23:58:27.581268 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 12 23:58:27.581316 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 23:58:27.497256 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:27.540541 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:27.559202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:58:27.559345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:27.573140 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:27.651885 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 23:58:27.651910 kernel: hv_vmbus: registering driver hv_utils Sep 12 23:58:27.651920 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 23:58:27.651930 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 23:58:27.651941 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 23:58:27.609725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:28.079711 systemd-resolved[250]: Clock change detected. Flushing caches. Sep 12 23:58:28.082664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:28.112373 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 23:58:28.112593 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 23:58:28.114279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:58:28.137133 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 23:58:28.137286 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: VF slot 1 added Sep 12 23:58:28.155133 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 12 23:58:28.155358 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 12 23:58:28.155410 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 23:58:28.189233 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 12 23:58:28.189415 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 12 23:58:28.189505 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 12 23:58:28.189590 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:28.189601 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 12 23:58:28.198342 kernel: hv_vmbus: registering driver hv_pci Sep 12 23:58:28.208147 kernel: hv_pci 4b938de2-ff2b-4fd5-b82e-85296756e3a0: PCI VMBus probing: Using version 0x10004 Sep 12 23:58:28.363603 kernel: hv_pci 4b938de2-ff2b-4fd5-b82e-85296756e3a0: PCI host bridge to bus ff2b:00 Sep 12 23:58:28.363802 kernel: pci_bus ff2b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 12 23:58:28.363910 kernel: pci_bus ff2b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 23:58:28.376610 kernel: pci ff2b:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 12 23:58:28.384172 kernel: pci ff2b:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 23:58:28.389559 kernel: pci ff2b:00:02.0: enabling Extended Tags Sep 12 23:58:28.407125 kernel: pci ff2b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ff2b:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 12 23:58:28.419021 kernel: pci_bus ff2b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 23:58:28.419234 kernel: pci ff2b:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 12 23:58:28.458505 kernel: mlx5_core ff2b:00:02.0: enabling device (0000 -> 0002) Sep 12 23:58:28.467123 kernel: mlx5_core ff2b:00:02.0: firmware version: 16.31.2424 Sep 12 23:58:28.744483 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: VF registering: eth1 Sep 12 23:58:28.744681 kernel: mlx5_core ff2b:00:02.0 eth1: joined to eth0 Sep 12 23:58:28.754136 kernel: mlx5_core ff2b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 12 23:58:28.767134 kernel: mlx5_core ff2b:00:02.0 enP65323s1: renamed from eth1 Sep 12 23:58:29.006171 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (481) Sep 12 23:58:29.020558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 23:58:29.043134 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (492) Sep 12 23:58:29.052286 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 12 23:58:29.064251 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 12 23:58:29.075738 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 12 23:58:29.103310 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:58:29.129249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 12 23:58:30.144153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 12 23:58:30.144715 disk-uuid[600]: The operation has completed successfully. Sep 12 23:58:30.220545 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:58:30.220640 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:58:30.242337 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
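verity-setup.service, starting above, is where the verity.usrhash value from the kernel command line gets used: it is the root of a dm-verity Merkle tree over the /usr partition, so each block read through /dev/mapper/usr is verified against a hash chain ending in that value. A small sketch of the shape of that arithmetic, assuming dm-verity's default 4 KiB block size and omitting the per-device salt the real tree mixes into every hash:

# The root hash is a sha256 digest (cf. the "verity: sha256" line below);
# leaves of the tree hash 4 KiB data blocks. Salt omitted in this sketch.
import hashlib

usrhash = "e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9"
assert len(bytes.fromhex(usrhash)) == hashlib.sha256().digest_size  # 32 bytes

block = b"\x00" * 4096                    # one 4 KiB data block
leaf = hashlib.sha256(block).hexdigest()  # one leaf node of the hash tree
print(leaf)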
Sep 12 23:58:30.254170 sh[716]: Success Sep 12 23:58:30.293137 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 12 23:58:30.716679 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:58:30.740232 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 23:58:30.749909 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 23:58:30.789974 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 12 23:58:30.790032 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:30.796611 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 23:58:30.801598 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:58:30.806142 kernel: BTRFS info (device dm-0): using free space tree Sep 12 23:58:31.244015 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 23:58:31.249514 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:58:31.266331 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:58:31.274254 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:58:31.309919 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:31.309953 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:31.315083 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:58:31.386153 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:58:31.398894 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 23:58:31.412947 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:31.399343 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:58:31.425243 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:58:31.436780 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:58:31.449269 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 23:58:31.458409 systemd-networkd[897]: lo: Link UP Sep 12 23:58:31.458413 systemd-networkd[897]: lo: Gained carrier Sep 12 23:58:31.459945 systemd-networkd[897]: Enumeration completed Sep 12 23:58:31.466056 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:31.466059 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:58:31.469211 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:58:31.483554 systemd[1]: Reached target network.target - Network. 
Sep 12 23:58:31.566121 kernel: mlx5_core ff2b:00:02.0 enP65323s1: Link up Sep 12 23:58:31.566383 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 23:58:31.654130 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: Data path switched to VF: enP65323s1 Sep 12 23:58:31.654657 systemd-networkd[897]: enP65323s1: Link UP Sep 12 23:58:31.654894 systemd-networkd[897]: eth0: Link UP Sep 12 23:58:31.655327 systemd-networkd[897]: eth0: Gained carrier Sep 12 23:58:31.655337 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:31.677628 systemd-networkd[897]: enP65323s1: Gained carrier Sep 12 23:58:31.693141 systemd-networkd[897]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 23:58:32.807287 ignition[900]: Ignition 2.19.0 Sep 12 23:58:32.807297 ignition[900]: Stage: fetch-offline Sep 12 23:58:32.807332 ignition[900]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:32.812051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:58:32.807339 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:32.827405 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 23:58:32.807430 ignition[900]: parsed url from cmdline: "" Sep 12 23:58:32.807433 ignition[900]: no config URL provided Sep 12 23:58:32.807438 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:58:32.807445 ignition[900]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:58:32.807449 ignition[900]: failed to fetch config: resource requires networking Sep 12 23:58:32.807610 ignition[900]: Ignition finished successfully Sep 12 23:58:32.847502 ignition[909]: Ignition 2.19.0 Sep 12 23:58:32.847508 ignition[909]: Stage: fetch Sep 12 23:58:32.847711 ignition[909]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:32.847720 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:32.847813 ignition[909]: parsed url from cmdline: "" Sep 12 23:58:32.847816 ignition[909]: no config URL provided Sep 12 23:58:32.847821 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:58:32.847831 ignition[909]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:58:32.847852 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 23:58:32.939269 ignition[909]: GET result: OK Sep 12 23:58:32.939330 ignition[909]: config has been read from IMDS userdata Sep 12 23:58:32.939375 ignition[909]: parsing config with SHA512: 7eb132286ab4753bb0f24c02cfe94269ad17aeb33525e3236fe6cfe59ba2885e1fdee48fbe0e1bcaf911ab3bf9bf7f74df575b10379514ce0ede84e0e2994b32 Sep 12 23:58:32.943040 unknown[909]: fetched base config from "system" Sep 12 23:58:32.943440 ignition[909]: fetch: fetch complete Sep 12 23:58:32.943046 unknown[909]: fetched base config from "system" Sep 12 23:58:32.943445 ignition[909]: fetch: fetch passed Sep 12 23:58:32.943052 unknown[909]: fetched user config from "azure" Sep 12 23:58:32.943485 ignition[909]: Ignition finished successfully Sep 12 23:58:32.948439 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 23:58:32.976383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
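The fetch stage above records exactly where the config came from: the userData endpoint of the Azure instance metadata service at 169.254.169.254. A minimal reproduction of that request from inside the VM; IMDS requires the Metadata header, userData comes back base64-encoded, and hashing the decoded bytes should yield a digest of the kind Ignition logs ("parsing config with SHA512: ...").

# Re-run the GET Ignition logs above: fetch userData from Azure IMDS,
# decode it, and take its SHA512.
import base64, hashlib, urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")
req = urllib.request.Request(URL, headers={"Metadata": "true"})
raw = urllib.request.urlopen(req, timeout=5).read()  # base64 text
config = base64.b64decode(raw)
print(hashlib.sha512(config).hexdigest())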
Sep 12 23:58:32.999758 ignition[915]: Ignition 2.19.0 Sep 12 23:58:33.003084 ignition[915]: Stage: kargs Sep 12 23:58:33.003335 ignition[915]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:33.007736 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:58:33.003347 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:33.004280 ignition[915]: kargs: kargs passed Sep 12 23:58:33.004332 ignition[915]: Ignition finished successfully Sep 12 23:58:33.030240 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 23:58:33.035748 systemd-networkd[897]: eth0: Gained IPv6LL Sep 12 23:58:33.049375 ignition[921]: Ignition 2.19.0 Sep 12 23:58:33.049381 ignition[921]: Stage: disks Sep 12 23:58:33.052401 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 23:58:33.049622 ignition[921]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:33.061234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:58:33.049634 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:33.069839 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:58:33.050888 ignition[921]: disks: disks passed Sep 12 23:58:33.081599 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:58:33.050942 ignition[921]: Ignition finished successfully Sep 12 23:58:33.091649 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:58:33.102780 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:58:33.128364 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:58:33.197658 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 12 23:58:33.204607 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:58:33.223232 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:58:33.284134 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 12 23:58:33.285052 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:58:33.289927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:58:33.335192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:58:33.359284 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Sep 12 23:58:33.372095 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:33.372132 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:33.376450 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:58:33.385136 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:58:33.385258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:58:33.396223 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 23:58:33.409797 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:58:33.409834 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:58:33.434336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 23:58:33.439211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Sep 12 23:58:33.459378 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 23:58:34.079918 coreos-metadata[957]: Sep 12 23:58:34.079 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 23:58:34.089459 coreos-metadata[957]: Sep 12 23:58:34.089 INFO Fetch successful Sep 12 23:58:34.095533 coreos-metadata[957]: Sep 12 23:58:34.095 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 23:58:34.106604 coreos-metadata[957]: Sep 12 23:58:34.106 INFO Fetch successful Sep 12 23:58:34.120152 coreos-metadata[957]: Sep 12 23:58:34.120 INFO wrote hostname ci-4081.3.5-n-4f403f96f8 to /sysroot/etc/hostname Sep 12 23:58:34.129204 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 23:58:34.430094 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:58:34.485277 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:58:34.508410 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:58:34.517373 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:58:35.608054 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:58:35.623263 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 23:58:35.635089 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 23:58:35.654926 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:35.649552 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:58:35.680982 ignition[1057]: INFO : Ignition 2.19.0 Sep 12 23:58:35.680982 ignition[1057]: INFO : Stage: mount Sep 12 23:58:35.680982 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:35.680982 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:35.684929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:58:35.716905 ignition[1057]: INFO : mount: mount passed Sep 12 23:58:35.716905 ignition[1057]: INFO : Ignition finished successfully Sep 12 23:58:35.692350 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:58:35.721337 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:58:35.743248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:58:35.777126 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069) Sep 12 23:58:35.793120 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 12 23:58:35.793160 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 23:58:35.793171 kernel: BTRFS info (device sda6): using free space tree Sep 12 23:58:35.802120 kernel: BTRFS info (device sda6): auto enabling async discard Sep 12 23:58:35.811268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
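The flatcar-metadata-hostname unit above makes two network round trips: a probe of the Azure wire server (168.63.129.16) to confirm the platform endpoint answers, then an IMDS read of the VM name, which it writes to /sysroot/etc/hostname so the real root boots with the platform-assigned name. A rough sketch using only the endpoints visible in the log; the wire-server call is reduced here to a plain reachability check:

```python
import urllib.request

WIRESERVER = "http://168.63.129.16/?comp=versions"
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

def fetch(url: str, imds: bool = False) -> bytes:
    headers = {"Metadata": "true"} if imds else {}  # IMDS requires this header
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

fetch(WIRESERVER)  # raises if the platform endpoint is unreachable
name = fetch(NAME_URL, imds=True).decode().strip()
with open("/sysroot/etc/hostname", "w") as f:  # same path as in the log
    f.write(name + "\n")
```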
Sep 12 23:58:35.838144 ignition[1087]: INFO : Ignition 2.19.0 Sep 12 23:58:35.838144 ignition[1087]: INFO : Stage: files Sep 12 23:58:35.846191 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:35.846191 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:35.846191 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:58:35.870748 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:58:35.870748 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:58:35.956517 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:58:35.963547 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:58:35.963547 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:58:35.956985 unknown[1087]: wrote ssh authorized keys file for user: core Sep 12 23:58:36.016505 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 23:58:36.026889 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 12 23:58:36.063728 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 23:58:36.167237 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 12 23:58:36.167237 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:58:36.186792 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 12 23:58:36.669238 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 23:58:36.893895 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 12 23:58:36.893895 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 23:58:36.943590 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:58:36.955571 ignition[1087]: INFO : files: files passed Sep 12 23:58:36.955571 ignition[1087]: INFO : Ignition finished successfully Sep 12 23:58:36.955348 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 23:58:36.979722 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:58:37.001266 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:58:37.026328 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:58:37.067404 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:58:37.067404 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:58:37.026441 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 23:58:37.101611 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 23:58:37.067875 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:58:37.082311 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 23:58:37.122358 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 23:58:37.151604 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 23:58:37.151747 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
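Everything the files stage just did (the core user's SSH keys, the helm tarball download, the yaml manifests and update.conf, the kubernetes sysext symlink, the preset enabling prepare-helm.service) is declared in the Ignition config fetched earlier. For orientation, this is roughly the shape of an Ignition spec 3.x config that would drive those operations; the field names follow the published spec, but the values are illustrative placeholders, not the config this machine actually received:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [
        # Produces the ensureUsers / ssh-keys operations logged above.
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
    ]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"},
        ],
    },
    "systemd": {"units": [
        # enabled=True is what appears as "setting preset to enabled".
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
    ]},
}

print(json.dumps(config, indent=2))
```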
Sep 12 23:58:37.163596 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 23:58:37.175586 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 23:58:37.186852 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 23:58:37.204386 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 23:58:37.225081 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:58:37.238370 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 23:58:37.255646 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:37.267656 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:58:37.273973 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 23:58:37.284674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 23:58:37.284800 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 23:58:37.300197 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 23:58:37.305814 systemd[1]: Stopped target basic.target - Basic System. Sep 12 23:58:37.317069 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 23:58:37.328271 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:58:37.338796 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 23:58:37.350381 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 23:58:37.361547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 23:58:37.373864 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 23:58:37.384310 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 23:58:37.395706 systemd[1]: Stopped target swap.target - Swaps. Sep 12 23:58:37.405007 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 23:58:37.405139 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:58:37.419213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:58:37.425045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:58:37.436380 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 23:58:37.436449 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:58:37.448035 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 23:58:37.448166 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 23:58:37.464940 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 23:58:37.465059 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 23:58:37.471778 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 23:58:37.471866 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 23:58:37.482364 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 23:58:37.482458 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 12 23:58:37.560990 ignition[1139]: INFO : Ignition 2.19.0 Sep 12 23:58:37.560990 ignition[1139]: INFO : Stage: umount Sep 12 23:58:37.560990 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:58:37.560990 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 23:58:37.512387 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 23:58:37.605158 ignition[1139]: INFO : umount: umount passed Sep 12 23:58:37.605158 ignition[1139]: INFO : Ignition finished successfully Sep 12 23:58:37.528325 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 23:58:37.528489 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:58:37.561319 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 23:58:37.572182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 23:58:37.572333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:58:37.580882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 23:58:37.580989 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 23:58:37.610682 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 23:58:37.610966 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 23:58:37.620220 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 23:58:37.620462 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 23:58:37.630572 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 23:58:37.630622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 23:58:37.647797 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 23:58:37.647861 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 23:58:37.658465 systemd[1]: Stopped target network.target - Network. Sep 12 23:58:37.663399 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 23:58:37.663476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:58:37.680951 systemd[1]: Stopped target paths.target - Path Units. Sep 12 23:58:37.692625 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 23:58:37.698131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:58:37.714432 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 23:58:37.723989 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 23:58:37.733834 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 23:58:37.733908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 23:58:37.743920 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 23:58:37.743977 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 23:58:37.754617 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 23:58:37.754676 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 23:58:37.764680 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 23:58:37.764724 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 23:58:37.774816 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 23:58:37.785271 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 12 23:58:37.797157 systemd-networkd[897]: eth0: DHCPv6 lease lost Sep 12 23:58:37.799054 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 23:58:37.799728 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 23:58:37.799807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 23:58:37.816655 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 23:58:37.816760 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 23:58:37.824537 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 23:58:37.825160 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 23:58:37.839988 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 23:58:37.840055 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:58:37.868296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 23:58:37.877900 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 23:58:37.877971 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:58:37.889798 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:58:37.889851 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:37.900916 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 23:58:38.076504 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: Data path switched from VF: enP65323s1 Sep 12 23:58:37.900961 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 23:58:37.912466 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 23:58:37.912515 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:58:37.924926 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:58:37.961724 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 23:58:37.961913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:58:37.972863 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 23:58:37.972914 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 23:58:37.983479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 23:58:37.983510 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:58:38.005339 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 23:58:38.005399 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:58:38.021255 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 23:58:38.021311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 23:58:38.037877 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 23:58:38.037925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:58:38.077562 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 23:58:38.093614 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 23:58:38.093692 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:58:38.105803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 12 23:58:38.105855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:38.118867 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 23:58:38.119033 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 23:58:38.130673 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 23:58:38.130763 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 23:58:38.142646 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 23:58:38.142759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 23:58:38.160995 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 23:58:38.161159 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 23:58:38.171881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 23:58:38.327668 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 12 23:58:38.199360 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 23:58:38.218266 systemd[1]: Switching root. Sep 12 23:58:38.340492 systemd-journald[217]: Journal stopped Sep 12 23:58:47.038832 kernel: mlx5_core ff2b:00:02.0: poll_health:835:(pid 0): device's health compromised - reached miss count Sep 12 23:58:47.038859 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 23:58:47.038872 kernel: SELinux: policy capability open_perms=1 Sep 12 23:58:47.038880 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 23:58:47.038888 kernel: SELinux: policy capability always_check_network=0 Sep 12 23:58:47.038895 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 23:58:47.038904 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 23:58:47.038912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 23:58:47.038920 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 23:58:47.038928 kernel: audit: type=1403 audit(1757721520.282:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 23:58:47.038939 systemd[1]: Successfully loaded SELinux policy in 262.304ms. Sep 12 23:58:47.038949 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.418ms. Sep 12 23:58:47.038959 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 23:58:47.038968 systemd[1]: Detected virtualization microsoft. Sep 12 23:58:47.038979 systemd[1]: Detected architecture arm64. Sep 12 23:58:47.038988 systemd[1]: Detected first boot. Sep 12 23:58:47.038997 systemd[1]: Hostname set to <ci-4081.3.5-n-4f403f96f8>. Sep 12 23:58:47.039006 systemd[1]: Initializing machine ID from random generator. Sep 12 23:58:47.039015 zram_generator::config[1180]: No configuration found. Sep 12 23:58:47.039025 systemd[1]: Populated /etc with preset unit settings. Sep 12 23:58:47.039034 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 23:58:47.039045 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 23:58:47.039055 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 23:58:47.039064 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:58:47.039074 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 23:58:47.039083 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 23:58:47.039092 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 23:58:47.039136 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 23:58:47.039152 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 23:58:47.039162 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 23:58:47.039171 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 23:58:47.039181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 23:58:47.039190 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 23:58:47.039199 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 23:58:47.039209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 23:58:47.039218 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 23:58:47.039229 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 23:58:47.039238 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 23:58:47.039248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 23:58:47.039257 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 23:58:47.039269 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 23:58:47.039278 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 23:58:47.039289 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 23:58:47.039299 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:58:47.039310 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:58:47.039320 systemd[1]: Reached target slices.target - Slice Units. Sep 12 23:58:47.039329 systemd[1]: Reached target swap.target - Swaps. Sep 12 23:58:47.039339 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 23:58:47.039348 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 23:58:47.039357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:58:47.039367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:58:47.039378 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:58:47.039388 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 23:58:47.039397 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 23:58:47.039407 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 23:58:47.039416 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 23:58:47.039426 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 23:58:47.039437 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 23:58:47.039447 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 12 23:58:47.039457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 23:58:47.039467 systemd[1]: Reached target machines.target - Containers. Sep 12 23:58:47.039477 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 23:58:47.039486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:58:47.039497 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:58:47.039506 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 23:58:47.039518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:58:47.039527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:58:47.039537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:58:47.039546 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 23:58:47.039556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:58:47.039566 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 23:58:47.039575 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 23:58:47.039585 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 23:58:47.039596 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 23:58:47.039606 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 23:58:47.039615 kernel: fuse: init (API version 7.39) Sep 12 23:58:47.039624 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:58:47.039633 kernel: loop: module loaded Sep 12 23:58:47.039642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:58:47.039652 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:58:47.039662 kernel: ACPI: bus type drm_connector registered Sep 12 23:58:47.039671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 23:58:47.039702 systemd-journald[1266]: Collecting audit messages is disabled. Sep 12 23:58:47.039723 systemd-journald[1266]: Journal started Sep 12 23:58:47.039745 systemd-journald[1266]: Runtime Journal (/run/log/journal/09fa408d60994c24848c3401d66be53a) is 8.0M, max 78.5M, 70.5M free. Sep 12 23:58:45.860042 systemd[1]: Queued start job for default target multi-user.target. Sep 12 23:58:46.052545 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 12 23:58:46.052905 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 23:58:46.053222 systemd[1]: systemd-journald.service: Consumed 3.002s CPU time. Sep 12 23:58:47.059565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:58:47.061186 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 23:58:47.071889 systemd[1]: Stopped verity-setup.service. Sep 12 23:58:47.089121 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:58:47.089945 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 12 23:58:47.095812 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 23:58:47.102294 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 23:58:47.107809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 23:58:47.114477 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 23:58:47.121353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 23:58:47.128135 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 23:58:47.134954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:58:47.142369 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 23:58:47.142497 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 23:58:47.149303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:58:47.149440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:58:47.155911 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:58:47.156042 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:58:47.162960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:58:47.163086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:58:47.170276 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 23:58:47.170402 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 23:58:47.176670 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:58:47.176790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:58:47.183005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:58:47.189404 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:58:47.196981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 23:58:47.206149 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:58:47.224013 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:58:47.236352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 23:58:47.244318 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 23:58:47.250574 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 23:58:47.250615 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:58:47.257488 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 23:58:47.265730 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:58:47.273117 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 23:58:47.278981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:58:47.332257 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 23:58:47.339305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 12 23:58:47.345945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:58:47.349347 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 23:58:47.357809 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:58:47.358876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:58:47.367285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 23:58:47.381331 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 23:58:47.398285 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 23:58:47.407155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 23:58:47.413720 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 23:58:47.427870 systemd-journald[1266]: Time spent on flushing to /var/log/journal/09fa408d60994c24848c3401d66be53a is 12.930ms for 893 entries. Sep 12 23:58:47.427870 systemd-journald[1266]: System Journal (/var/log/journal/09fa408d60994c24848c3401d66be53a) is 8.0M, max 2.6G, 2.6G free. Sep 12 23:58:47.463763 systemd-journald[1266]: Received client request to flush runtime journal. Sep 12 23:58:47.421617 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:58:47.434512 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 23:58:47.447060 udevadm[1317]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 23:58:47.447804 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 23:58:47.464269 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 23:58:47.472679 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 23:58:47.504129 kernel: loop0: detected capacity change from 0 to 114432 Sep 12 23:58:47.530024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 23:58:47.532247 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 23:58:47.545680 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:58:48.067149 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 23:58:48.074800 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 23:58:48.086300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:58:48.174138 kernel: loop1: detected capacity change from 0 to 114328 Sep 12 23:58:48.290530 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Sep 12 23:58:48.290550 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Sep 12 23:58:48.294889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:58:48.714129 kernel: loop2: detected capacity change from 0 to 31320 Sep 12 23:58:49.444434 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:58:49.458421 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 12 23:58:49.477963 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Sep 12 23:58:49.617130 kernel: loop3: detected capacity change from 0 to 211168 Sep 12 23:58:49.664132 kernel: loop4: detected capacity change from 0 to 114432 Sep 12 23:58:49.679135 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 23:58:49.694130 kernel: loop6: detected capacity change from 0 to 31320 Sep 12 23:58:49.710132 kernel: loop7: detected capacity change from 0 to 211168 Sep 12 23:58:49.725154 (sd-merge)[1340]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 23:58:49.725585 (sd-merge)[1340]: Merged extensions into '/usr'. Sep 12 23:58:49.729339 systemd[1]: Reloading requested from client PID 1314 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 23:58:49.729457 systemd[1]: Reloading... Sep 12 23:58:49.786171 zram_generator::config[1362]: No configuration found. Sep 12 23:58:50.155968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:50.212154 systemd[1]: Reloading finished in 482 ms. Sep 12 23:58:50.241048 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 23:58:50.259299 systemd[1]: Starting ensure-sysext.service... Sep 12 23:58:50.265299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:58:50.304487 systemd[1]: Reloading requested from client PID 1421 ('systemctl') (unit ensure-sysext.service)... Sep 12 23:58:50.304509 systemd[1]: Reloading... Sep 12 23:58:50.362958 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 23:58:50.363265 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 23:58:50.363932 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 23:58:50.364170 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Sep 12 23:58:50.364214 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Sep 12 23:58:50.379225 zram_generator::config[1450]: No configuration found. Sep 12 23:58:50.435844 systemd-tmpfiles[1422]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:58:50.435865 systemd-tmpfiles[1422]: Skipping /boot Sep 12 23:58:50.458891 systemd-tmpfiles[1422]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 23:58:50.458906 systemd-tmpfiles[1422]: Skipping /boot Sep 12 23:58:50.513693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:58:50.584927 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 23:58:50.585392 systemd[1]: Reloading finished in 280 ms. Sep 12 23:58:50.599348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:58:50.622816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
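The (sd-merge) step above is systemd-sysext merging the extension images (the kubernetes.raw link written by Ignition, plus the vendor extensions) into /usr: each image is loop-mounted (hence the loop4..loop7 capacity probes), vetted for compatibility, and stacked into an overlay over /usr. The gate compares the image's extension-release file against the host's os-release. Below is a rough sketch of that check, simplified from the documented matching rules (ID must match the host or be _any; then SYSEXT_LEVEL, or failing that VERSION_ID, must agree); the release values are made up for illustration:

```python
def parse_release(text: str) -> dict[str, str]:
    """Parse os-release / extension-release style KEY=VALUE lines."""
    pairs: dict[str, str] = {}
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, value = line.split("=", 1)
            pairs[key.strip()] = value.strip().strip('"')
    return pairs

def sysext_compatible(ext: dict[str, str], host: dict[str, str]) -> bool:
    """Approximation of systemd-sysext's compatibility match."""
    if ext.get("ID") not in ("_any", host.get("ID")):
        return False
    if "SYSEXT_LEVEL" in ext:
        return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
    return ext.get("VERSION_ID") == host.get("VERSION_ID")

host = parse_release('ID=flatcar\nVERSION_ID=4081.3.5\nSYSEXT_LEVEL=1.0\n')
ext = parse_release('ID=flatcar\nSYSEXT_LEVEL=1.0\n')
print(sysext_compatible(ext, host))  # True: this image may be merged into /usr
```

Once merged, the reload that follows ("Reloading...") lets systemd pick up unit files shipped inside the extensions, which is why docker.socket warnings appear only after this point.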
Sep 12 23:58:50.670722 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 23:58:50.670803 kernel: hv_vmbus: registering driver hv_balloon Sep 12 23:58:50.673938 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:58:50.692397 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 12 23:58:50.692480 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 12 23:58:50.693260 kernel: hv_vmbus: registering driver hyperv_fb Sep 12 23:58:50.702418 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 12 23:58:50.711383 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 12 23:58:50.717705 kernel: Console: switching to colour dummy device 80x25 Sep 12 23:58:50.725187 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 23:58:50.726402 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:58:50.734234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:58:50.735484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:58:50.745366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:58:50.757706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:58:50.766985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:58:50.773600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:58:50.802286 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1515) Sep 12 23:58:50.803483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:58:50.817441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:58:50.839058 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:58:50.852637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:58:50.853486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:58:50.868782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:58:50.870491 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:58:50.880783 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:58:50.881929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:58:50.905743 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:58:50.919827 systemd[1]: Finished ensure-sysext.service. Sep 12 23:58:50.942455 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 12 23:58:50.955419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:58:50.961303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:58:50.968356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:58:50.977937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:58:50.991257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 12 23:58:50.998658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:58:51.007320 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:58:51.015696 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:58:51.028256 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:58:51.038264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:58:51.055891 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 23:58:51.065469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:58:51.065645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:58:51.073997 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:58:51.074161 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:58:51.081709 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:58:51.089094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:58:51.090219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:58:51.100740 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:58:51.100883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:58:51.121253 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 23:58:51.132403 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:58:51.132477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:58:51.149300 augenrules[1618]: No rules Sep 12 23:58:51.152144 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:58:51.166719 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 23:58:51.181145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:58:51.216881 systemd-resolved[1575]: Positive Trust Anchors: Sep 12 23:58:51.216898 systemd-resolved[1575]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:58:51.216932 systemd-resolved[1575]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:58:51.228038 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:58:51.274069 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 23:58:51.281888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 23:58:51.293248 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Sep 12 23:58:51.309984 lvm[1643]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 23:58:51.324752 systemd-resolved[1575]: Using system hostname 'ci-4081.3.5-n-4f403f96f8'. Sep 12 23:58:51.326060 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:58:51.333849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:58:51.343730 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 23:58:51.371853 systemd-networkd[1566]: lo: Link UP Sep 12 23:58:51.371862 systemd-networkd[1566]: lo: Gained carrier Sep 12 23:58:51.374147 systemd-networkd[1566]: Enumeration completed Sep 12 23:58:51.374579 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:58:51.380260 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:51.380264 systemd-networkd[1566]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:58:51.382777 systemd[1]: Reached target network.target - Network. Sep 12 23:58:51.396258 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:58:51.452620 kernel: mlx5_core ff2b:00:02.0 enP65323s1: Link up Sep 12 23:58:51.452933 kernel: buffer_size[0]=0 is not enough for lossless buffer Sep 12 23:58:51.499470 kernel: hv_netvsc 002248c2-3aa7-0022-48c2-3aa7002248c2 eth0: Data path switched to VF: enP65323s1 Sep 12 23:58:51.501063 systemd-networkd[1566]: enP65323s1: Link UP Sep 12 23:58:51.501507 systemd-networkd[1566]: eth0: Link UP Sep 12 23:58:51.501515 systemd-networkd[1566]: eth0: Gained carrier Sep 12 23:58:51.501530 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:58:51.508509 systemd-networkd[1566]: enP65323s1: Gained carrier Sep 12 23:58:51.520157 systemd-networkd[1566]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 23:58:52.819740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:58:53.511213 systemd-networkd[1566]: eth0: Gained IPv6LL Sep 12 23:58:53.513557 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:58:53.522713 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:58:54.099777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:58:54.107746 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:58:58.799735 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 23:58:58.816662 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 23:58:58.831346 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:58:58.855275 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:58:58.861843 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:58:58.869338 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 12 23:58:58.876344 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:58:58.883797 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:58:58.889846 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:58:58.896941 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:58:58.904137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:58:58.904170 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:58:58.909335 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:58:58.930908 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:58:58.939284 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:58:58.973811 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:58:58.980801 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:58:58.986798 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:58:58.992200 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:58:58.997545 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:58:58.997572 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:58:59.018193 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 23:58:59.025255 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:58:59.041896 (chronyd)[1658]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 12 23:58:59.044260 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 23:58:59.053302 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:58:59.065258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:58:59.066136 chronyd[1666]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 12 23:58:59.083345 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:58:59.089514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:58:59.089556 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Sep 12 23:58:59.091418 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 23:58:59.097609 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 12 23:58:59.100727 jq[1664]: false Sep 12 23:58:59.102872 KVP[1668]: KVP starting; pid is:1668 Sep 12 23:58:59.104307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:58:59.114973 KVP[1668]: KVP LIC Version: 3.1 Sep 12 23:58:59.115158 kernel: hv_utils: KVP IC version 4.0 Sep 12 23:58:59.119238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Sep 12 23:58:59.126717 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:58:59.133585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:58:59.141232 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:58:59.141781 chronyd[1666]: Timezone right/UTC failed leap second check, ignoring Sep 12 23:58:59.141979 chronyd[1666]: Loaded seccomp filter (level 2) Sep 12 23:58:59.153647 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:58:59.161981 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:58:59.170642 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:58:59.171139 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:58:59.175706 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:58:59.182836 extend-filesystems[1667]: Found loop4 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found loop5 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found loop6 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found loop7 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda1 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda2 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda3 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found usr Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda4 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda6 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda7 Sep 12 23:58:59.182836 extend-filesystems[1667]: Found sda9 Sep 12 23:58:59.182836 extend-filesystems[1667]: Checking size of /dev/sda9 Sep 12 23:58:59.188219 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:58:59.346303 extend-filesystems[1667]: Old size kept for /dev/sda9 Sep 12 23:58:59.346303 extend-filesystems[1667]: Found sr0 Sep 12 23:58:59.201453 systemd[1]: Started chronyd.service - NTP client/server. Sep 12 23:58:59.356338 update_engine[1679]: I20250912 23:58:59.312654 1679 main.cc:92] Flatcar Update Engine starting Sep 12 23:58:59.219353 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:58:59.356757 jq[1682]: true Sep 12 23:58:59.219538 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:58:59.222791 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:58:59.222952 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:58:59.273561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:58:59.359973 jq[1690]: true Sep 12 23:58:59.288555 (ntainerd)[1701]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:58:59.290210 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:58:59.290386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:58:59.291847 systemd-logind[1678]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 23:58:59.294451 systemd-logind[1678]: New seat seat0. 
Sep 12 23:58:59.304534 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:58:59.313555 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:58:59.316931 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:58:59.375340 tar[1689]: linux-arm64/LICENSE Sep 12 23:58:59.379059 tar[1689]: linux-arm64/helm Sep 12 23:58:59.411130 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1709) Sep 12 23:58:59.490544 bash[1742]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:58:59.494415 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:58:59.516099 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:58:59.525124 dbus-daemon[1661]: [system] SELinux support is enabled Sep 12 23:58:59.525328 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:58:59.543141 update_engine[1679]: I20250912 23:58:59.536273 1679 update_check_scheduler.cc:74] Next update check in 2m48s Sep 12 23:58:59.539202 dbus-daemon[1661]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 23:58:59.538646 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:58:59.538673 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:58:59.545877 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:58:59.545899 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:58:59.552665 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:58:59.570455 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:58:59.642288 coreos-metadata[1660]: Sep 12 23:58:59.641 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 23:58:59.645571 coreos-metadata[1660]: Sep 12 23:58:59.645 INFO Fetch successful Sep 12 23:58:59.645571 coreos-metadata[1660]: Sep 12 23:58:59.645 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 23:58:59.650754 coreos-metadata[1660]: Sep 12 23:58:59.650 INFO Fetch successful Sep 12 23:58:59.651028 coreos-metadata[1660]: Sep 12 23:58:59.650 INFO Fetching http://168.63.129.16/machine/8c23331f-ac56-4f8a-b631-45f73eca2764/b0bc3190%2D0fcc%2D4e56%2D9487%2D1024b58d00fe.%5Fci%2D4081.3.5%2Dn%2D4f403f96f8?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 23:58:59.653013 coreos-metadata[1660]: Sep 12 23:58:59.652 INFO Fetch successful Sep 12 23:58:59.653256 coreos-metadata[1660]: Sep 12 23:58:59.653 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 23:58:59.669141 coreos-metadata[1660]: Sep 12 23:58:59.668 INFO Fetch successful Sep 12 23:58:59.707372 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 23:58:59.719011 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
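[Editor's note] The coreos-metadata fetches above hit two distinct services: the WireServer at 168.63.129.16 (goal state, machine config) and the Instance Metadata Service (IMDS) at 169.254.169.254. IMDS only answers unproxied requests that carry the `Metadata: true` header. A sketch replicating the vmSize request logged above (the example return value in the comment is hypothetical):

```python
import urllib.request

# Exact URL from the coreos-metadata log line above.
IMDS = ("http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout=2.0):
    """Fetch this VM's size from Azure IMDS.

    The 'Metadata: true' header is mandatory; IMDS rejects requests
    without it, which is why agents talk to the link-local address
    directly rather than through any proxy.
    """
    req = urllib.request.Request(IMDS, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vm_size())  # e.g. 'Standard_D2ps_v5' (illustrative)
```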
Sep 12 23:58:59.924361 locksmithd[1767]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:59:00.166435 tar[1689]: linux-arm64/README.md Sep 12 23:59:00.186734 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:59:00.297562 containerd[1701]: time="2025-09-12T23:59:00.296357980Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:59:00.353766 sshd_keygen[1704]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:59:00.356482 containerd[1701]: time="2025-09-12T23:59:00.356302660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359294180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359329660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359348300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359494860Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359511700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359577700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359717 containerd[1701]: time="2025-09-12T23:59:00.359589420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359905 containerd[1701]: time="2025-09-12T23:59:00.359748340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359905 containerd[1701]: time="2025-09-12T23:59:00.359763620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359905 containerd[1701]: time="2025-09-12T23:59:00.359776180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359905 containerd[1701]: time="2025-09-12T23:59:00.359786540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:00.359905 containerd[1701]: time="2025-09-12T23:59:00.359855820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 23:59:00.360067 containerd[1701]: time="2025-09-12T23:59:00.360039740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:59:00.360268 containerd[1701]: time="2025-09-12T23:59:00.360243620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:59:00.360309 containerd[1701]: time="2025-09-12T23:59:00.360281140Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:59:00.360471 containerd[1701]: time="2025-09-12T23:59:00.360369660Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:59:00.360471 containerd[1701]: time="2025-09-12T23:59:00.360421300Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:59:00.375523 containerd[1701]: time="2025-09-12T23:59:00.375473860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:59:00.375523 containerd[1701]: time="2025-09-12T23:59:00.375535660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:59:00.375665 containerd[1701]: time="2025-09-12T23:59:00.375556220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:59:00.375665 containerd[1701]: time="2025-09-12T23:59:00.375576660Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:59:00.375665 containerd[1701]: time="2025-09-12T23:59:00.375596140Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 23:59:00.375781 containerd[1701]: time="2025-09-12T23:59:00.375754660Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.375989420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376092540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376126460Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376142340Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376155460Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376169460Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376182740Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376196620Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376211260Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376223380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376235700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376248900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376268900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376615 containerd[1701]: time="2025-09-12T23:59:00.376283540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376301700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376316220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376328460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376345780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376357900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376371460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376385060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376399220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376412100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376423860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376436140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376452660Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376472660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376484180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.376895 containerd[1701]: time="2025-09-12T23:59:00.376495700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377030700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377059940Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377071580Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377083980Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377093380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377121660Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377133860Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:59:00.377214 containerd[1701]: time="2025-09-12T23:59:00.377144500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 23:59:00.379139 containerd[1701]: time="2025-09-12T23:59:00.377414740Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:59:00.379139 containerd[1701]: time="2025-09-12T23:59:00.377699940Z" level=info msg="Connect containerd service" Sep 12 23:59:00.379139 containerd[1701]: time="2025-09-12T23:59:00.377748340Z" level=info msg="using legacy CRI server" Sep 12 23:59:00.379139 containerd[1701]: time="2025-09-12T23:59:00.377755700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:59:00.379139 containerd[1701]: time="2025-09-12T23:59:00.378043020Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:59:00.379894 containerd[1701]: time="2025-09-12T23:59:00.379861940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:59:00.380166 
containerd[1701]: time="2025-09-12T23:59:00.380142900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:59:00.380203 containerd[1701]: time="2025-09-12T23:59:00.380187460Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:59:00.380250 containerd[1701]: time="2025-09-12T23:59:00.380218860Z" level=info msg="Start subscribing containerd event" Sep 12 23:59:00.380288 containerd[1701]: time="2025-09-12T23:59:00.380258100Z" level=info msg="Start recovering state" Sep 12 23:59:00.380338 containerd[1701]: time="2025-09-12T23:59:00.380320100Z" level=info msg="Start event monitor" Sep 12 23:59:00.380338 containerd[1701]: time="2025-09-12T23:59:00.380336260Z" level=info msg="Start snapshots syncer" Sep 12 23:59:00.380387 containerd[1701]: time="2025-09-12T23:59:00.380345660Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:59:00.380387 containerd[1701]: time="2025-09-12T23:59:00.380353460Z" level=info msg="Start streaming server" Sep 12 23:59:00.387520 containerd[1701]: time="2025-09-12T23:59:00.380411260Z" level=info msg="containerd successfully booted in 0.087811s" Sep 12 23:59:00.380497 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:59:00.396460 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:59:00.410374 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:59:00.418008 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 23:59:00.424700 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:59:00.426138 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:59:00.443425 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:59:00.470441 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 12 23:59:00.490717 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:59:00.499812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:00.508248 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:00.514323 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:59:00.528427 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:59:00.535405 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:59:00.540868 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:59:00.547324 systemd[1]: Startup finished in 648ms (kernel) + 13.861s (initrd) + 20.525s (userspace) = 35.035s. Sep 12 23:59:01.018801 kubelet[1813]: E0912 23:59:01.018739 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:01.021697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:01.021835 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
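[Editor's note] The kubelet failure above is expected on first boot: kubelet.service ships enabled, but /var/lib/kubelet/config.yaml is only written later by `kubeadm init`/`kubeadm join`, so the unit exits 1 and systemd keeps rescheduling it (the restart counters climbing to 1, 2, 3 appear further down in this log). A sketch, assuming standard kubeadm conventions, of the kind of minimal KubeletConfiguration the unit is waiting for; the values are illustrative defaults, not what this node eventually received:

```python
from pathlib import Path

# Minimal stand-in for the file kubeadm would normally generate.
# apiVersion/kind are the real KubeletConfiguration identifiers; the
# remaining fields are illustrative.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

def write_placeholder(path="/var/lib/kubelet/config.yaml"):
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(KUBELET_CONFIG)

# write_placeholder()  # don't run on a real node; kubeadm owns this file
```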
Sep 12 23:59:01.399468 login[1815]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 12 23:59:01.400830 login[1816]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:01.412301 systemd-logind[1678]: New session 1 of user core. Sep 12 23:59:01.412773 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:59:01.423352 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:59:01.462326 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:59:01.469499 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:59:01.516486 (systemd)[1829]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:59:01.836464 systemd[1829]: Queued start job for default target default.target. Sep 12 23:59:01.844037 systemd[1829]: Created slice app.slice - User Application Slice. Sep 12 23:59:01.844069 systemd[1829]: Reached target paths.target - Paths. Sep 12 23:59:01.844081 systemd[1829]: Reached target timers.target - Timers. Sep 12 23:59:01.845267 systemd[1829]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:59:01.855926 systemd[1829]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:59:01.855986 systemd[1829]: Reached target sockets.target - Sockets. Sep 12 23:59:01.855998 systemd[1829]: Reached target basic.target - Basic System. Sep 12 23:59:01.856044 systemd[1829]: Reached target default.target - Main User Target. Sep 12 23:59:01.856071 systemd[1829]: Startup finished in 333ms. Sep 12 23:59:01.856214 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:59:01.857538 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:59:02.399835 login[1815]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:02.403583 systemd-logind[1678]: New session 2 of user core. Sep 12 23:59:02.413234 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 23:59:02.658326 waagent[1808]: 2025-09-12T23:59:02.658189Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 12 23:59:02.668130 waagent[1808]: 2025-09-12T23:59:02.664451Z INFO Daemon Daemon OS: flatcar 4081.3.5 Sep 12 23:59:02.669118 waagent[1808]: 2025-09-12T23:59:02.669054Z INFO Daemon Daemon Python: 3.11.9 Sep 12 23:59:02.673485 waagent[1808]: 2025-09-12T23:59:02.673434Z INFO Daemon Daemon Run daemon Sep 12 23:59:02.677622 waagent[1808]: 2025-09-12T23:59:02.677579Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Sep 12 23:59:02.686469 waagent[1808]: 2025-09-12T23:59:02.686415Z INFO Daemon Daemon Using waagent for provisioning Sep 12 23:59:02.691760 waagent[1808]: 2025-09-12T23:59:02.691715Z INFO Daemon Daemon Activate resource disk Sep 12 23:59:02.696425 waagent[1808]: 2025-09-12T23:59:02.696383Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 23:59:02.707446 waagent[1808]: 2025-09-12T23:59:02.707399Z INFO Daemon Daemon Found device: None Sep 12 23:59:02.712192 waagent[1808]: 2025-09-12T23:59:02.712149Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 23:59:02.720336 waagent[1808]: 2025-09-12T23:59:02.720296Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 23:59:02.733947 waagent[1808]: 2025-09-12T23:59:02.733891Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 23:59:02.739612 waagent[1808]: 2025-09-12T23:59:02.739568Z INFO Daemon Daemon Running default provisioning handler Sep 12 23:59:02.751142 waagent[1808]: 2025-09-12T23:59:02.751065Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 23:59:02.765528 waagent[1808]: 2025-09-12T23:59:02.765469Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 23:59:02.775352 waagent[1808]: 2025-09-12T23:59:02.775305Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 23:59:02.780257 waagent[1808]: 2025-09-12T23:59:02.780218Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 23:59:02.898407 waagent[1808]: 2025-09-12T23:59:02.898312Z INFO Daemon Daemon Successfully mounted dvd Sep 12 23:59:02.932131 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 23:59:02.934363 waagent[1808]: 2025-09-12T23:59:02.934286Z INFO Daemon Daemon Detect protocol endpoint Sep 12 23:59:02.939570 waagent[1808]: 2025-09-12T23:59:02.939518Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 23:59:02.945535 waagent[1808]: 2025-09-12T23:59:02.945487Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 12 23:59:02.952598 waagent[1808]: 2025-09-12T23:59:02.952555Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 23:59:02.958212 waagent[1808]: 2025-09-12T23:59:02.958166Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 23:59:02.963550 waagent[1808]: 2025-09-12T23:59:02.963508Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 23:59:03.014747 waagent[1808]: 2025-09-12T23:59:03.014701Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 23:59:03.021915 waagent[1808]: 2025-09-12T23:59:03.021885Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 23:59:03.027480 waagent[1808]: 2025-09-12T23:59:03.027437Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 23:59:03.282666 waagent[1808]: 2025-09-12T23:59:03.282521Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 23:59:03.288990 waagent[1808]: 2025-09-12T23:59:03.288935Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 23:59:03.298282 waagent[1808]: 2025-09-12T23:59:03.298237Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 23:59:03.344178 waagent[1808]: 2025-09-12T23:59:03.344130Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 23:59:03.349950 waagent[1808]: 2025-09-12T23:59:03.349905Z INFO Daemon Sep 12 23:59:03.352823 waagent[1808]: 2025-09-12T23:59:03.352783Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 9ee3909d-580c-4dfd-a715-cb212a795835 eTag: 8418390041134854273 source: Fabric] Sep 12 23:59:03.364227 waagent[1808]: 2025-09-12T23:59:03.364183Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 12 23:59:03.371098 waagent[1808]: 2025-09-12T23:59:03.371049Z INFO Daemon Sep 12 23:59:03.374200 waagent[1808]: 2025-09-12T23:59:03.374159Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 23:59:03.385312 waagent[1808]: 2025-09-12T23:59:03.385277Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 23:59:03.527886 waagent[1808]: 2025-09-12T23:59:03.527798Z INFO Daemon Downloaded certificate {'thumbprint': 'FA0BAE858E0C1A255CBF08F6508846E8CC517BFF', 'hasPrivateKey': True} Sep 12 23:59:03.538039 waagent[1808]: 2025-09-12T23:59:03.537959Z INFO Daemon Fetch goal state completed Sep 12 23:59:03.583815 waagent[1808]: 2025-09-12T23:59:03.583753Z INFO Daemon Daemon Starting provisioning Sep 12 23:59:03.589262 waagent[1808]: 2025-09-12T23:59:03.589203Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 23:59:03.594678 waagent[1808]: 2025-09-12T23:59:03.594628Z INFO Daemon Daemon Set hostname [ci-4081.3.5-n-4f403f96f8] Sep 12 23:59:03.647121 waagent[1808]: 2025-09-12T23:59:03.644917Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-n-4f403f96f8] Sep 12 23:59:03.651644 waagent[1808]: 2025-09-12T23:59:03.651583Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 23:59:03.658286 waagent[1808]: 2025-09-12T23:59:03.658231Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 23:59:03.721317 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:59:03.721324 systemd-networkd[1566]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 23:59:03.721348 systemd-networkd[1566]: eth0: DHCP lease lost Sep 12 23:59:03.722682 waagent[1808]: 2025-09-12T23:59:03.722598Z INFO Daemon Daemon Create user account if not exists Sep 12 23:59:03.728213 waagent[1808]: 2025-09-12T23:59:03.728160Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 23:59:03.733732 waagent[1808]: 2025-09-12T23:59:03.733688Z INFO Daemon Daemon Configure sudoer Sep 12 23:59:03.738164 waagent[1808]: 2025-09-12T23:59:03.738098Z INFO Daemon Daemon Configure sshd Sep 12 23:59:03.739215 systemd-networkd[1566]: eth0: DHCPv6 lease lost Sep 12 23:59:03.742520 waagent[1808]: 2025-09-12T23:59:03.742466Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 23:59:03.755362 waagent[1808]: 2025-09-12T23:59:03.755309Z INFO Daemon Daemon Deploy ssh public key. Sep 12 23:59:03.770213 systemd-networkd[1566]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 12 23:59:04.926783 waagent[1808]: 2025-09-12T23:59:04.926709Z INFO Daemon Daemon Provisioning complete Sep 12 23:59:04.946063 waagent[1808]: 2025-09-12T23:59:04.946010Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 23:59:04.952848 waagent[1808]: 2025-09-12T23:59:04.952792Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 12 23:59:04.963622 waagent[1808]: 2025-09-12T23:59:04.963567Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 12 23:59:05.093972 waagent[1879]: 2025-09-12T23:59:05.093309Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 12 23:59:05.093972 waagent[1879]: 2025-09-12T23:59:05.093466Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Sep 12 23:59:05.093972 waagent[1879]: 2025-09-12T23:59:05.093522Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 12 23:59:05.265588 waagent[1879]: 2025-09-12T23:59:05.265454Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 12 23:59:05.265896 waagent[1879]: 2025-09-12T23:59:05.265857Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 23:59:05.266036 waagent[1879]: 2025-09-12T23:59:05.266001Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 23:59:05.274393 waagent[1879]: 2025-09-12T23:59:05.274330Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 23:59:05.280426 waagent[1879]: 2025-09-12T23:59:05.280378Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 23:59:05.281004 waagent[1879]: 2025-09-12T23:59:05.280966Z INFO ExtHandler Sep 12 23:59:05.281200 waagent[1879]: 2025-09-12T23:59:05.281164Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 68a86395-c544-44ad-91a4-843997e391a4 eTag: 8418390041134854273 source: Fabric] Sep 12 23:59:05.281597 waagent[1879]: 2025-09-12T23:59:05.281557Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
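[Editor's note] The goal-state dance visible in these waagent lines (versions probe, goal state, vmSettings via the HostGAPlugin) runs over the WireServer at 168.63.129.16. A sketch of the incarnation fetch, assuming the documented `x-ms-version` header; 2012-11-30 is the wire protocol version the agent negotiated in the exchange logged earlier:

```python
import urllib.request
import xml.etree.ElementTree as ET

WIRESERVER = "http://168.63.129.16"

def fetch_goal_state_incarnation(version="2012-11-30", timeout=4.0):
    """Fetch the WireServer goal state, roughly as waagent does above.

    The 'x-ms-version' header carries the negotiated wire protocol
    version; the WireServer rejects requests without it.
    """
    req = urllib.request.Request(
        f"{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": version},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        root = ET.fromstring(resp.read())
    # This is what "[incarnation 1]" in the log refers to: the counter
    # bumps whenever the fabric publishes a new goal state.
    return root.findtext("Incarnation")
```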
Sep 12 23:59:05.282895 waagent[1879]: 2025-09-12T23:59:05.282235Z INFO ExtHandler Sep 12 23:59:05.282895 waagent[1879]: 2025-09-12T23:59:05.282309Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 23:59:05.287125 waagent[1879]: 2025-09-12T23:59:05.286176Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 23:59:05.362972 waagent[1879]: 2025-09-12T23:59:05.362897Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FA0BAE858E0C1A255CBF08F6508846E8CC517BFF', 'hasPrivateKey': True} Sep 12 23:59:05.363643 waagent[1879]: 2025-09-12T23:59:05.363602Z INFO ExtHandler Fetch goal state completed Sep 12 23:59:05.382568 waagent[1879]: 2025-09-12T23:59:05.382507Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1879 Sep 12 23:59:05.382835 waagent[1879]: 2025-09-12T23:59:05.382801Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 23:59:05.384526 waagent[1879]: 2025-09-12T23:59:05.384485Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 23:59:05.384962 waagent[1879]: 2025-09-12T23:59:05.384926Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 23:59:05.446352 waagent[1879]: 2025-09-12T23:59:05.446314Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 23:59:05.446676 waagent[1879]: 2025-09-12T23:59:05.446640Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 23:59:05.453163 waagent[1879]: 2025-09-12T23:59:05.453128Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 23:59:05.459678 systemd[1]: Reloading requested from client PID 1892 ('systemctl') (unit waagent.service)... Sep 12 23:59:05.459695 systemd[1]: Reloading... Sep 12 23:59:05.554150 zram_generator::config[1930]: No configuration found. Sep 12 23:59:05.650356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:05.729692 systemd[1]: Reloading finished in 269 ms. Sep 12 23:59:05.756145 waagent[1879]: 2025-09-12T23:59:05.754438Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 12 23:59:05.760210 systemd[1]: Reloading requested from client PID 1981 ('systemctl') (unit waagent.service)... Sep 12 23:59:05.760228 systemd[1]: Reloading... Sep 12 23:59:05.827217 zram_generator::config[2012]: No configuration found. Sep 12 23:59:05.946201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:06.021993 systemd[1]: Reloading finished in 261 ms. Sep 12 23:59:06.047438 waagent[1879]: 2025-09-12T23:59:06.047343Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 23:59:06.047536 waagent[1879]: 2025-09-12T23:59:06.047499Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 23:59:06.546722 waagent[1879]: 2025-09-12T23:59:06.546624Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Sep 12 23:59:06.547301 waagent[1879]: 2025-09-12T23:59:06.547248Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 12 23:59:06.548091 waagent[1879]: 2025-09-12T23:59:06.548035Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 23:59:06.548606 waagent[1879]: 2025-09-12T23:59:06.548464Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 23:59:06.548926 waagent[1879]: 2025-09-12T23:59:06.548791Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 23:59:06.549154 waagent[1879]: 2025-09-12T23:59:06.548929Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 23:59:06.549154 waagent[1879]: 2025-09-12T23:59:06.549048Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 23:59:06.549357 waagent[1879]: 2025-09-12T23:59:06.549318Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 23:59:06.550136 waagent[1879]: 2025-09-12T23:59:06.549527Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 23:59:06.550136 waagent[1879]: 2025-09-12T23:59:06.549676Z INFO EnvHandler ExtHandler Configure routes Sep 12 23:59:06.550136 waagent[1879]: 2025-09-12T23:59:06.549741Z INFO EnvHandler ExtHandler Gateway:None Sep 12 23:59:06.550136 waagent[1879]: 2025-09-12T23:59:06.549782Z INFO EnvHandler ExtHandler Routes:None Sep 12 23:59:06.550469 waagent[1879]: 2025-09-12T23:59:06.550367Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 23:59:06.551280 waagent[1879]: 2025-09-12T23:59:06.550536Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 23:59:06.551280 waagent[1879]: 2025-09-12T23:59:06.551211Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 12 23:59:06.551608 waagent[1879]: 2025-09-12T23:59:06.551570Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 23:59:06.551608 waagent[1879]: 2025-09-12T23:59:06.551492Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Sep 12 23:59:06.554969 waagent[1879]: 2025-09-12T23:59:06.554431Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 23:59:06.554969 waagent[1879]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 23:59:06.554969 waagent[1879]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 23:59:06.554969 waagent[1879]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 23:59:06.554969 waagent[1879]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 23:59:06.554969 waagent[1879]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 23:59:06.554969 waagent[1879]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 23:59:06.561126 waagent[1879]: 2025-09-12T23:59:06.561045Z INFO ExtHandler ExtHandler Sep 12 23:59:06.561232 waagent[1879]: 2025-09-12T23:59:06.561193Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 482346f9-0cd2-4af6-b9e2-aa1eb1b29946 correlation a8aaacd7-849b-4d9a-bf52-a86c52c235c2 created: 2025-09-12T23:57:36.198860Z] Sep 12 23:59:06.561612 waagent[1879]: 2025-09-12T23:59:06.561563Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 12 23:59:06.562191 waagent[1879]: 2025-09-12T23:59:06.562150Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 23:59:06.601053 waagent[1879]: 2025-09-12T23:59:06.600984Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 01735A66-1A02-4F41-AA5C-98F436B1B38A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 12 23:59:06.664060 waagent[1879]: 2025-09-12T23:59:06.663966Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 23:59:06.664060 waagent[1879]: Executing ['ip', '-a', '-o', 'link']: Sep 12 23:59:06.664060 waagent[1879]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 23:59:06.664060 waagent[1879]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c2:3a:a7 brd ff:ff:ff:ff:ff:ff Sep 12 23:59:06.664060 waagent[1879]: 3: enP65323s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c2:3a:a7 brd ff:ff:ff:ff:ff:ff\ altname enP65323p0s2 Sep 12 23:59:06.664060 waagent[1879]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 23:59:06.664060 waagent[1879]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 23:59:06.664060 waagent[1879]: 2: eth0 inet 10.200.20.16/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 23:59:06.664060 waagent[1879]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 23:59:06.664060 waagent[1879]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 23:59:06.664060 waagent[1879]: 2: eth0 inet6 fe80::222:48ff:fec2:3aa7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 23:59:06.807190 waagent[1879]: 2025-09-12T23:59:06.806927Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 12 23:59:06.807190 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:59:06.807190 waagent[1879]: pkts bytes target prot opt in out source destination Sep 12 23:59:06.807190 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:59:06.807190 waagent[1879]: pkts bytes target prot opt in out source destination Sep 12 23:59:06.807190 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:59:06.807190 waagent[1879]: pkts bytes target prot opt in out source destination Sep 12 23:59:06.807190 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 23:59:06.807190 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 23:59:06.807190 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 23:59:06.810040 waagent[1879]: 2025-09-12T23:59:06.809976Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 23:59:06.810040 waagent[1879]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:59:06.810040 waagent[1879]: pkts bytes target prot opt in out source destination Sep 12 23:59:06.810040 waagent[1879]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:59:06.810040 waagent[1879]: pkts bytes target prot opt in out source destination Sep 12 23:59:06.810040 waagent[1879]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 23:59:06.810040 waagent[1879]: pkts bytes target prot opt in out source destination Sep 12 23:59:06.810040 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 23:59:06.810040 waagent[1879]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 23:59:06.810040 waagent[1879]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 23:59:06.810304 waagent[1879]: 2025-09-12T23:59:06.810265Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 12 23:59:11.092386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:59:11.103294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:11.246708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:11.250891 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:11.306526 kubelet[2108]: E0912 23:59:11.306470 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:11.309099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:11.309242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:21.342631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:59:21.350278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:21.653357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
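[Editor's note] The routing table waagent dumped above comes straight from /proc/net/route, where addresses are little-endian 32-bit hex fields: decoding `0114C80A` yields 10.200.20.1, the DHCP gateway acquired earlier, and `0014C80A` is the 10.200.20.0/24 subnet route. A short decoder:

```python
import socket
import struct

def decode(hexaddr):
    """Convert a /proc/net/route hex field to dotted-quad.

    Fields are little-endian, so '0114C80A' is the byte sequence
    0a c8 14 01 -> 10.200.20.1 (this VM's default gateway).
    """
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

def default_gateways(path="/proc/net/route"):
    """Yield (interface, gateway) for every default route."""
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            iface, dest, gw = line.split()[:3]
            if dest == "00000000":  # destination 0.0.0.0 = default route
                yield iface, decode(gw)

assert decode("0114C80A") == "10.200.20.1"
assert decode("0014C80A") == "10.200.20.0"
```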
Sep 12 23:59:21.656751 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:21.692778 kubelet[2123]: E0912 23:59:21.692710 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:21.695294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:21.695425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:22.869070 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:59:22.870272 systemd[1]: Started sshd@0-10.200.20.16:22-10.200.16.10:45346.service - OpenSSH per-connection server daemon (10.200.16.10:45346). Sep 12 23:59:22.926857 chronyd[1666]: Selected source PHC0 Sep 12 23:59:23.451186 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 45346 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:23.452522 sshd[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:23.457321 systemd-logind[1678]: New session 3 of user core. Sep 12 23:59:23.463282 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:59:23.847521 systemd[1]: Started sshd@1-10.200.20.16:22-10.200.16.10:45362.service - OpenSSH per-connection server daemon (10.200.16.10:45362). Sep 12 23:59:24.261145 sshd[2136]: Accepted publickey for core from 10.200.16.10 port 45362 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:24.262605 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:24.266893 systemd-logind[1678]: New session 4 of user core. Sep 12 23:59:24.281256 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:59:24.570488 sshd[2136]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:24.573805 systemd[1]: sshd@1-10.200.20.16:22-10.200.16.10:45362.service: Deactivated successfully. Sep 12 23:59:24.575381 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:59:24.576217 systemd-logind[1678]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:59:24.577176 systemd-logind[1678]: Removed session 4. Sep 12 23:59:24.647525 systemd[1]: Started sshd@2-10.200.20.16:22-10.200.16.10:45372.service - OpenSSH per-connection server daemon (10.200.16.10:45372). Sep 12 23:59:25.057484 sshd[2143]: Accepted publickey for core from 10.200.16.10 port 45372 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:25.058801 sshd[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:25.063082 systemd-logind[1678]: New session 5 of user core. Sep 12 23:59:25.069248 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:59:25.370046 sshd[2143]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:25.373447 systemd[1]: sshd@2-10.200.20.16:22-10.200.16.10:45372.service: Deactivated successfully. Sep 12 23:59:25.377151 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:59:25.377835 systemd-logind[1678]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:59:25.378662 systemd-logind[1678]: Removed session 5. 
Sep 12 23:59:25.443903 systemd[1]: Started sshd@3-10.200.20.16:22-10.200.16.10:45386.service - OpenSSH per-connection server daemon (10.200.16.10:45386). Sep 12 23:59:25.855962 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 45386 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:25.857289 sshd[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:25.861985 systemd-logind[1678]: New session 6 of user core. Sep 12 23:59:25.872274 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:59:26.175184 sshd[2150]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:26.178446 systemd[1]: sshd@3-10.200.20.16:22-10.200.16.10:45386.service: Deactivated successfully. Sep 12 23:59:26.179991 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:59:26.180659 systemd-logind[1678]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:59:26.181587 systemd-logind[1678]: Removed session 6. Sep 12 23:59:26.253017 systemd[1]: Started sshd@4-10.200.20.16:22-10.200.16.10:45396.service - OpenSSH per-connection server daemon (10.200.16.10:45396). Sep 12 23:59:26.660696 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 45396 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:26.662014 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:26.665664 systemd-logind[1678]: New session 7 of user core. Sep 12 23:59:26.677252 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:59:27.170507 sudo[2160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:59:27.170789 sudo[2160]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:27.201847 sudo[2160]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:27.280943 sshd[2157]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:27.284707 systemd[1]: sshd@4-10.200.20.16:22-10.200.16.10:45396.service: Deactivated successfully. Sep 12 23:59:27.286483 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 23:59:27.287346 systemd-logind[1678]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:59:27.288314 systemd-logind[1678]: Removed session 7. Sep 12 23:59:27.355790 systemd[1]: Started sshd@5-10.200.20.16:22-10.200.16.10:45402.service - OpenSSH per-connection server daemon (10.200.16.10:45402). Sep 12 23:59:27.766887 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 45402 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:27.768280 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:27.772928 systemd-logind[1678]: New session 8 of user core. Sep 12 23:59:27.778267 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 23:59:28.002970 sudo[2169]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:59:28.003603 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:28.006755 sudo[2169]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:28.011300 sudo[2168]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:59:28.011555 sudo[2168]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:28.024588 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:59:28.026156 auditctl[2172]: No rules Sep 12 23:59:28.026485 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:59:28.026683 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:59:28.028867 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:59:28.052034 augenrules[2190]: No rules Sep 12 23:59:28.053435 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:59:28.054688 sudo[2168]: pam_unix(sudo:session): session closed for user root Sep 12 23:59:28.156610 sshd[2165]: pam_unix(sshd:session): session closed for user core Sep 12 23:59:28.160045 systemd[1]: sshd@5-10.200.20.16:22-10.200.16.10:45402.service: Deactivated successfully. Sep 12 23:59:28.161725 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:59:28.162475 systemd-logind[1678]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:59:28.163480 systemd-logind[1678]: Removed session 8. Sep 12 23:59:28.234480 systemd[1]: Started sshd@6-10.200.20.16:22-10.200.16.10:45418.service - OpenSSH per-connection server daemon (10.200.16.10:45418). Sep 12 23:59:28.656605 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 45418 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 12 23:59:28.657884 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:59:28.661599 systemd-logind[1678]: New session 9 of user core. Sep 12 23:59:28.669290 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:59:28.894352 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:59:28.894619 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:59:30.196345 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:59:30.196522 (dockerd)[2216]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:59:31.161593 dockerd[2216]: time="2025-09-12T23:59:31.161527627Z" level=info msg="Starting up" Sep 12 23:59:31.637123 dockerd[2216]: time="2025-09-12T23:59:31.637077281Z" level=info msg="Loading containers: start." Sep 12 23:59:31.842335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:59:31.848280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:31.876162 kernel: Initializing XFRM netlink socket Sep 12 23:59:31.973639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:59:31.977452 (kubelet)[2289]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:32.037370 kubelet[2289]: E0912 23:59:32.037317 2289 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:32.040205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:32.040471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:32.628508 systemd-networkd[1566]: docker0: Link UP Sep 12 23:59:32.656243 dockerd[2216]: time="2025-09-12T23:59:32.656204999Z" level=info msg="Loading containers: done." Sep 12 23:59:32.722117 dockerd[2216]: time="2025-09-12T23:59:32.722054418Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:59:32.722307 dockerd[2216]: time="2025-09-12T23:59:32.722280859Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:59:32.722449 dockerd[2216]: time="2025-09-12T23:59:32.722428379Z" level=info msg="Daemon has completed initialization" Sep 12 23:59:32.786004 dockerd[2216]: time="2025-09-12T23:59:32.785499033Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:59:32.785716 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:59:33.939060 containerd[1701]: time="2025-09-12T23:59:33.938798480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 23:59:34.898509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258717427.mount: Deactivated successfully. 
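The kubelet crash at the start of this span is expected at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, which has not run yet, so the unit exits with status 1 and systemd keeps rescheduling it. Purely to illustrate what the loader is looking for, a sketch that writes a minimal KubeletConfiguration; apiVersion, kind, and cgroupDriver are real v1beta1 keys, but this trimmed file is an assumption, not what kubeadm eventually generates on this node:

    import pathlib
    import textwrap

    MINIMAL_CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        """)

    def write_kubelet_config(path="/var/lib/kubelet/config.yaml"):
        p = pathlib.Path(path)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(MINIMAL_CONFIG)
        return p

    # write_kubelet_config()  # would end the crash loop seen below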
Sep 12 23:59:36.121699 containerd[1701]: time="2025-09-12T23:59:36.121657913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:36.124711 containerd[1701]: time="2025-09-12T23:59:36.124673719Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Sep 12 23:59:36.128077 containerd[1701]: time="2025-09-12T23:59:36.128032526Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:36.133888 containerd[1701]: time="2025-09-12T23:59:36.133826299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:36.135708 containerd[1701]: time="2025-09-12T23:59:36.135059541Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.196211941s" Sep 12 23:59:36.135708 containerd[1701]: time="2025-09-12T23:59:36.135099381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 12 23:59:36.136763 containerd[1701]: time="2025-09-12T23:59:36.136731705Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 23:59:37.418911 containerd[1701]: time="2025-09-12T23:59:37.418866586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:37.422324 containerd[1701]: time="2025-09-12T23:59:37.422290873Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Sep 12 23:59:37.425932 containerd[1701]: time="2025-09-12T23:59:37.425887881Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:37.431334 containerd[1701]: time="2025-09-12T23:59:37.430943731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:37.432056 containerd[1701]: time="2025-09-12T23:59:37.432017694Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.295241149s" Sep 12 23:59:37.432056 containerd[1701]: time="2025-09-12T23:59:37.432051694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 12 23:59:37.432507 
containerd[1701]: time="2025-09-12T23:59:37.432464214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 23:59:38.589145 containerd[1701]: time="2025-09-12T23:59:38.588617028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:38.591601 containerd[1701]: time="2025-09-12T23:59:38.591568234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Sep 12 23:59:38.594981 containerd[1701]: time="2025-09-12T23:59:38.594950841Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:38.600799 containerd[1701]: time="2025-09-12T23:59:38.600767894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:38.602870 containerd[1701]: time="2025-09-12T23:59:38.602834618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.170332723s" Sep 12 23:59:38.602897 containerd[1701]: time="2025-09-12T23:59:38.602874498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 12 23:59:38.603335 containerd[1701]: time="2025-09-12T23:59:38.603305619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 23:59:38.805130 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Sep 12 23:59:39.637824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459504334.mount: Deactivated successfully. 
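Each "Pulled image" completion above reports both a byte count and a wall-clock duration, so effective registry throughput falls out directly: kube-apiserver moved 27386827 bytes in 2.196211941s, kube-controller-manager 25135832 in 1.295241149s, and kube-scheduler 19883910 in 1.170332723s. The arithmetic, using the values exactly as logged:

    pulls = {
        # image: (reported bytes, reported seconds), copied from the entries above
        "kube-apiserver:v1.33.5":          (27386827, 2.196211941),
        "kube-controller-manager:v1.33.5": (25135832, 1.295241149),
        "kube-scheduler:v1.33.5":          (19883910, 1.170332723),
    }

    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 2**20:.1f} MiB/s")
    # ~11.9, ~18.5 and ~16.2 MiB/s respectively; the first pull is the slowest,
    # plausibly because it also pays connection setup to registry.k8s.io.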
Sep 12 23:59:39.966140 containerd[1701]: time="2025-09-12T23:59:39.965886932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:39.969113 containerd[1701]: time="2025-09-12T23:59:39.969070859Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Sep 12 23:59:39.974098 containerd[1701]: time="2025-09-12T23:59:39.974066190Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:39.979311 containerd[1701]: time="2025-09-12T23:59:39.979275401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:39.980147 containerd[1701]: time="2025-09-12T23:59:39.979732162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.376392743s" Sep 12 23:59:39.980147 containerd[1701]: time="2025-09-12T23:59:39.979764962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 12 23:59:39.980253 containerd[1701]: time="2025-09-12T23:59:39.980232843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 23:59:40.719817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846469620.mount: Deactivated successfully. Sep 12 23:59:42.092746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 23:59:42.100285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:42.217862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:42.223013 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:42.304939 kubelet[2501]: E0912 23:59:42.304844 2501 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:42.307375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:42.307530 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
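kubelet.service has now failed twice with the identical missing-config error, and the "Scheduled restart job" stamps are strikingly regular: counter 3 at 23:59:31.842335, counter 4 at 23:59:42.092746, and counter 5 further below at 23:59:52.342642. Checking the spacing from those timestamps:

    from datetime import datetime

    stamps = ["23:59:31.842335", "23:59:42.092746", "23:59:52.342642"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    print([(b - a).total_seconds() for a, b in zip(times, times[1:])])
    # [10.250411, 10.249896] -- an even ~10.25s cadence, i.e. a fixed restart
    # delay plus a fraction of a second for the process to start and fail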
Sep 12 23:59:42.586421 containerd[1701]: time="2025-09-12T23:59:42.586299029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:42.590356 containerd[1701]: time="2025-09-12T23:59:42.590113957Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 12 23:59:42.593485 containerd[1701]: time="2025-09-12T23:59:42.593452164Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:42.599073 containerd[1701]: time="2025-09-12T23:59:42.599034536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:42.600768 containerd[1701]: time="2025-09-12T23:59:42.600271419Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.620011456s" Sep 12 23:59:42.600768 containerd[1701]: time="2025-09-12T23:59:42.600309539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 12 23:59:42.601085 containerd[1701]: time="2025-09-12T23:59:42.601065660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:59:43.171765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540813392.mount: Deactivated successfully. 
Sep 12 23:59:43.194854 containerd[1701]: time="2025-09-12T23:59:43.194796253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:43.199909 containerd[1701]: time="2025-09-12T23:59:43.199714943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 23:59:43.203219 containerd[1701]: time="2025-09-12T23:59:43.203165471Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:43.209000 containerd[1701]: time="2025-09-12T23:59:43.208621043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:43.209540 containerd[1701]: time="2025-09-12T23:59:43.209503884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 608.344783ms" Sep 12 23:59:43.209581 containerd[1701]: time="2025-09-12T23:59:43.209540765Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:59:43.210291 containerd[1701]: time="2025-09-12T23:59:43.210263326Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 23:59:43.800217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805214408.mount: Deactivated successfully. Sep 12 23:59:44.525595 update_engine[1679]: I20250912 23:59:44.525512 1679 update_attempter.cc:509] Updating boot flags... 
Sep 12 23:59:45.042162 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2543) Sep 12 23:59:45.119923 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2543) Sep 12 23:59:45.334228 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2543) Sep 12 23:59:46.245657 containerd[1701]: time="2025-09-12T23:59:46.245610912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:46.248715 containerd[1701]: time="2025-09-12T23:59:46.248685478Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Sep 12 23:59:46.251866 containerd[1701]: time="2025-09-12T23:59:46.251836965Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:46.258835 containerd[1701]: time="2025-09-12T23:59:46.258805380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:59:46.262135 containerd[1701]: time="2025-09-12T23:59:46.261613186Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.05130926s" Sep 12 23:59:46.262135 containerd[1701]: time="2025-09-12T23:59:46.261660226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 12 23:59:52.342642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 23:59:52.351562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:52.486254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:52.490975 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:59:52.527732 kubelet[2689]: E0912 23:59:52.527690 2689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:59:52.529426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:59:52.529546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:59:53.502057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:53.508331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:53.545373 systemd[1]: Reloading requested from client PID 2704 ('systemctl') (unit session-9.scope)... Sep 12 23:59:53.545394 systemd[1]: Reloading... Sep 12 23:59:53.670149 zram_generator::config[2747]: No configuration found. 
Sep 12 23:59:53.772633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:59:53.851519 systemd[1]: Reloading finished in 305 ms. Sep 12 23:59:53.904968 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:53.909832 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:59:53.911147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:53.915327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:59:54.040657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:59:54.045402 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:59:54.176344 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:54.176344 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:59:54.176344 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:59:54.436181 kubelet[2813]: I0912 23:59:54.176412 2813 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:59:55.132445 kubelet[2813]: I0912 23:59:55.132407 2813 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 23:59:55.134142 kubelet[2813]: I0912 23:59:55.132661 2813 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:59:55.134142 kubelet[2813]: I0912 23:59:55.132905 2813 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 23:59:55.423212 kubelet[2813]: E0912 23:59:55.421643 2813 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 23:59:55.424882 kubelet[2813]: I0912 23:59:55.424856 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:59:55.432753 kubelet[2813]: E0912 23:59:55.432710 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:59:55.432753 kubelet[2813]: I0912 23:59:55.432748 2813 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:59:55.436668 kubelet[2813]: I0912 23:59:55.436642 2813 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:59:55.438011 kubelet[2813]: I0912 23:59:55.437969 2813 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:59:55.438218 kubelet[2813]: I0912 23:59:55.438014 2813 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-4f403f96f8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:59:55.438308 kubelet[2813]: I0912 23:59:55.438229 2813 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:59:55.438308 kubelet[2813]: I0912 23:59:55.438240 2813 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 23:59:55.438378 kubelet[2813]: I0912 23:59:55.438360 2813 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:55.442051 kubelet[2813]: I0912 23:59:55.442030 2813 kubelet.go:480] "Attempting to sync node with API server" Sep 12 23:59:55.442110 kubelet[2813]: I0912 23:59:55.442070 2813 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:59:55.442110 kubelet[2813]: I0912 23:59:55.442098 2813 kubelet.go:386] "Adding apiserver pod source" Sep 12 23:59:55.443655 kubelet[2813]: I0912 23:59:55.443353 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:59:55.446693 kubelet[2813]: E0912 23:59:55.446667 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-4f403f96f8&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 23:59:55.447177 kubelet[2813]: I0912 23:59:55.447159 2813 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:59:55.449888 kubelet[2813]: I0912 23:59:55.447826 2813 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection 
featuregate is disabled" Sep 12 23:59:55.449888 kubelet[2813]: W0912 23:59:55.447891 2813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:59:55.450866 kubelet[2813]: I0912 23:59:55.450851 2813 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:59:55.450970 kubelet[2813]: I0912 23:59:55.450961 2813 server.go:1289] "Started kubelet" Sep 12 23:59:55.451886 kubelet[2813]: E0912 23:59:55.451853 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 23:59:55.451995 kubelet[2813]: I0912 23:59:55.451957 2813 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:59:55.452822 kubelet[2813]: I0912 23:59:55.452802 2813 server.go:317] "Adding debug handlers to kubelet server" Sep 12 23:59:55.453386 kubelet[2813]: I0912 23:59:55.453333 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:59:55.453745 kubelet[2813]: I0912 23:59:55.453726 2813 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:59:55.455014 kubelet[2813]: E0912 23:59:55.453946 2813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-4f403f96f8.1864ae7a6ee2d610 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-4f403f96f8,UID:ci-4081.3.5-n-4f403f96f8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-4f403f96f8,},FirstTimestamp:2025-09-12 23:59:55.450938896 +0000 UTC m=+1.402108599,LastTimestamp:2025-09-12 23:59:55.450938896 +0000 UTC m=+1.402108599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-4f403f96f8,}" Sep 12 23:59:55.457087 kubelet[2813]: I0912 23:59:55.457062 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:59:55.458957 kubelet[2813]: I0912 23:59:55.458928 2813 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:59:55.459505 kubelet[2813]: E0912 23:59:55.459479 2813 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 12 23:59:55.460228 kubelet[2813]: I0912 23:59:55.460204 2813 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:59:55.460325 kubelet[2813]: I0912 23:59:55.458949 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:59:55.460479 kubelet[2813]: I0912 23:59:55.460466 2813 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:59:55.462050 kubelet[2813]: E0912 23:59:55.462019 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial 
tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 23:59:55.462300 kubelet[2813]: E0912 23:59:55.462275 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-4f403f96f8?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="200ms" Sep 12 23:59:55.463809 kubelet[2813]: I0912 23:59:55.463781 2813 factory.go:223] Registration of the systemd container factory successfully Sep 12 23:59:55.463993 kubelet[2813]: I0912 23:59:55.463966 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:59:55.466411 kubelet[2813]: E0912 23:59:55.466393 2813 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:59:55.466924 kubelet[2813]: I0912 23:59:55.466906 2813 factory.go:223] Registration of the containerd container factory successfully Sep 12 23:59:55.535406 kubelet[2813]: I0912 23:59:55.535377 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:59:55.535591 kubelet[2813]: I0912 23:59:55.535570 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:59:55.535731 kubelet[2813]: I0912 23:59:55.535664 2813 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:59:55.559960 kubelet[2813]: E0912 23:59:55.559918 2813 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 12 23:59:55.660078 kubelet[2813]: E0912 23:59:55.660051 2813 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 12 23:59:55.663779 kubelet[2813]: E0912 23:59:55.663750 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-4f403f96f8?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="400ms" Sep 12 23:59:55.725820 kubelet[2813]: I0912 23:59:55.725728 2813 policy_none.go:49] "None policy: Start" Sep 12 23:59:55.726279 kubelet[2813]: I0912 23:59:55.725997 2813 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:59:55.726279 kubelet[2813]: I0912 23:59:55.726019 2813 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:59:55.735879 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:59:55.750696 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:59:55.761706 kubelet[2813]: E0912 23:59:55.760857 2813 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 12 23:59:55.763226 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
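The Container Manager NodeConfig logged a few entries back is one large JSON blob; the operationally interesting slice is HardEvictionThresholds, which shows this kubelet running with the stock defaults. Restated from the logged values (nothing here is new data, just the same thresholds unpacked into readable form):

    # Hard eviction thresholds exactly as they appear in the NodeConfig above
    hard_eviction = [
        ("memory.available",   "LessThan", "100Mi"),
        ("nodefs.available",   "LessThan", "10%"),
        ("nodefs.inodesFree",  "LessThan", "5%"),
        ("imagefs.available",  "LessThan", "15%"),
        ("imagefs.inodesFree", "LessThan", "5%"),
    ]

    for signal, op, value in hard_eviction:
        print(f"evict when {signal} {op} {value}")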
Sep 12 23:59:55.764450 kubelet[2813]: E0912 23:59:55.764418 2813 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 23:59:55.764666 kubelet[2813]: I0912 23:59:55.764646 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:59:55.764697 kubelet[2813]: I0912 23:59:55.764666 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:59:55.765382 kubelet[2813]: I0912 23:59:55.765335 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:59:55.766963 kubelet[2813]: E0912 23:59:55.766938 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 23:59:55.767046 kubelet[2813]: E0912 23:59:55.766995 2813 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 12 23:59:55.789512 kubelet[2813]: I0912 23:59:55.789457 2813 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 23:59:55.791074 kubelet[2813]: I0912 23:59:55.790957 2813 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 23:59:55.791074 kubelet[2813]: I0912 23:59:55.790983 2813 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 23:59:55.791074 kubelet[2813]: I0912 23:59:55.791003 2813 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 23:59:55.791074 kubelet[2813]: I0912 23:59:55.791009 2813 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 23:59:55.791074 kubelet[2813]: E0912 23:59:55.791045 2813 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 12 23:59:55.793024 kubelet[2813]: E0912 23:59:55.792828 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 23:59:55.868806 kubelet[2813]: I0912 23:59:55.868745 2813 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.869210 kubelet[2813]: E0912 23:59:55.869181 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.904024 systemd[1]: Created slice kubepods-burstable-podaa47bbc07fdc4af9fd15185a9183c792.slice - libcontainer container kubepods-burstable-podaa47bbc07fdc4af9fd15185a9183c792.slice. Sep 12 23:59:55.914905 kubelet[2813]: E0912 23:59:55.914867 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.919324 systemd[1]: Created slice kubepods-burstable-pode7cfd0d4beee0c578caf4b8b6b9c9f5c.slice - libcontainer container kubepods-burstable-pode7cfd0d4beee0c578caf4b8b6b9c9f5c.slice. 
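The event-post failure a few entries up is informative beyond the refused connection: its FirstTimestamp carries both the wall-clock time and a monotonic offset (m=+1.402108599), meaning the "Starting kubelet" event fired about 1.4s into this kubelet process's life. Subtracting recovers the process start time:

    from datetime import datetime, timedelta

    event_time = datetime.fromisoformat("2025-09-12 23:59:55.450938")  # truncated to µs
    offset = timedelta(seconds=1.402108599)
    print(event_time - offset)
    # 2025-09-12 23:59:54.048829 -- within ~10ms of the 'Started kubelet.service'
    # journal entry at 23:59:54.040657 above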
Sep 12 23:59:55.929333 kubelet[2813]: E0912 23:59:55.929303 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.932314 systemd[1]: Created slice kubepods-burstable-pod8db38a138095458eb0f6d00c72b277a4.slice - libcontainer container kubepods-burstable-pod8db38a138095458eb0f6d00c72b277a4.slice. Sep 12 23:59:55.934053 kubelet[2813]: E0912 23:59:55.934028 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962596 kubelet[2813]: I0912 23:59:55.962559 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962596 kubelet[2813]: I0912 23:59:55.962603 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8db38a138095458eb0f6d00c72b277a4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-4f403f96f8\" (UID: \"8db38a138095458eb0f6d00c72b277a4\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962716 kubelet[2813]: I0912 23:59:55.962623 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa47bbc07fdc4af9fd15185a9183c792-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" (UID: \"aa47bbc07fdc4af9fd15185a9183c792\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962716 kubelet[2813]: I0912 23:59:55.962640 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962716 kubelet[2813]: I0912 23:59:55.962654 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa47bbc07fdc4af9fd15185a9183c792-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" (UID: \"aa47bbc07fdc4af9fd15185a9183c792\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962716 kubelet[2813]: I0912 23:59:55.962675 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa47bbc07fdc4af9fd15185a9183c792-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" (UID: \"aa47bbc07fdc4af9fd15185a9183c792\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962716 kubelet[2813]: I0912 23:59:55.962690 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962827 kubelet[2813]: I0912 23:59:55.962705 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:55.962827 kubelet[2813]: I0912 23:59:55.962719 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:56.064286 kubelet[2813]: E0912 23:59:56.064185 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-4f403f96f8?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="800ms" Sep 12 23:59:56.068592 kubelet[2813]: E0912 23:59:56.068493 2813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-4f403f96f8.1864ae7a6ee2d610 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-4f403f96f8,UID:ci-4081.3.5-n-4f403f96f8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-4f403f96f8,},FirstTimestamp:2025-09-12 23:59:55.450938896 +0000 UTC m=+1.402108599,LastTimestamp:2025-09-12 23:59:55.450938896 +0000 UTC m=+1.402108599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-4f403f96f8,}" Sep 12 23:59:56.071040 kubelet[2813]: I0912 23:59:56.070806 2813 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:56.071149 kubelet[2813]: E0912 23:59:56.071095 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:56.216741 containerd[1701]: time="2025-09-12T23:59:56.216468304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-4f403f96f8,Uid:aa47bbc07fdc4af9fd15185a9183c792,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:56.230930 containerd[1701]: time="2025-09-12T23:59:56.230677948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-4f403f96f8,Uid:e7cfd0d4beee0c578caf4b8b6b9c9f5c,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:56.234792 containerd[1701]: time="2025-09-12T23:59:56.234750560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-4f403f96f8,Uid:8db38a138095458eb0f6d00c72b277a4,Namespace:kube-system,Attempt:0,}" Sep 12 23:59:56.411114 kubelet[2813]: 
E0912 23:59:56.411074 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 23:59:56.436827 kubelet[2813]: E0912 23:59:56.436790 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 23:59:56.473760 kubelet[2813]: I0912 23:59:56.473725 2813 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:56.474065 kubelet[2813]: E0912 23:59:56.474036 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:56.576173 kubelet[2813]: E0912 23:59:56.576133 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-4f403f96f8&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 23:59:56.865277 kubelet[2813]: E0912 23:59:56.865210 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-4f403f96f8?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="1.6s" Sep 12 23:59:56.865663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666142215.mount: Deactivated successfully. 
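The lease controller's "will retry" entries escalate their interval on each attempt: 200ms, then 400ms, then 800ms, and now 1.6s, a plain doubling backoff while the API server on 10.200.20.16:6443 keeps refusing connections. Sketched as a generator; the 7s ceiling is my assumption about the upstream default cap, not something visible in this log:

    import itertools

    def lease_retry_intervals(base=0.2, factor=2.0, cap=7.0):
        """Doubling backoff matching the logged 200ms/400ms/800ms/1.6s sequence."""
        interval = base
        while True:
            yield min(interval, cap)
            interval *= factor

    print(list(itertools.islice(lease_retry_intervals(), 6)))
    # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4] -- the first four match the log exactly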
Sep 12 23:59:56.887388 containerd[1701]: time="2025-09-12T23:59:56.887336819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:56.896904 containerd[1701]: time="2025-09-12T23:59:56.896866368Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 23:59:56.899343 containerd[1701]: time="2025-09-12T23:59:56.899313136Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:56.902278 containerd[1701]: time="2025-09-12T23:59:56.902250545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:59:56.907152 containerd[1701]: time="2025-09-12T23:59:56.906454998Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:56.910262 containerd[1701]: time="2025-09-12T23:59:56.910133209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:56.911777 containerd[1701]: time="2025-09-12T23:59:56.911740254Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:59:56.915300 containerd[1701]: time="2025-09-12T23:59:56.915257145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:59:56.917917 containerd[1701]: time="2025-09-12T23:59:56.916068347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 699.524323ms" Sep 12 23:59:56.920584 containerd[1701]: time="2025-09-12T23:59:56.920434641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 685.60792ms" Sep 12 23:59:56.943562 containerd[1701]: time="2025-09-12T23:59:56.943503392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 712.746204ms" Sep 12 23:59:57.264412 kubelet[2813]: E0912 23:59:57.264284 2813 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 23:59:57.276915 kubelet[2813]: I0912 23:59:57.276153 2813 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:57.276915 kubelet[2813]: E0912 23:59:57.276433 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:57.554872 containerd[1701]: time="2025-09-12T23:59:57.554700123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:57.556568 containerd[1701]: time="2025-09-12T23:59:57.556176607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:57.556568 containerd[1701]: time="2025-09-12T23:59:57.556197567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.557261 containerd[1701]: time="2025-09-12T23:59:57.557226650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.558558 containerd[1701]: time="2025-09-12T23:59:57.556885889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:57.558558 containerd[1701]: time="2025-09-12T23:59:57.558047533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:57.558558 containerd[1701]: time="2025-09-12T23:59:57.558060733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.558558 containerd[1701]: time="2025-09-12T23:59:57.558365934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.563549 containerd[1701]: time="2025-09-12T23:59:57.563446510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:59:57.563549 containerd[1701]: time="2025-09-12T23:59:57.563510430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:59:57.563549 containerd[1701]: time="2025-09-12T23:59:57.563526550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.563759 containerd[1701]: time="2025-09-12T23:59:57.563593630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:59:57.605246 systemd[1]: Started cri-containerd-1a18ee04558556618302c529f15673f608044d4f1b9dfa673828be639a23cb6e.scope - libcontainer container 1a18ee04558556618302c529f15673f608044d4f1b9dfa673828be639a23cb6e. Sep 12 23:59:57.606286 systemd[1]: Started cri-containerd-3e9c218bbbf08c1170948b21e25e6af2cf748597cd49e30fd8e4ef58ee7dd244.scope - libcontainer container 3e9c218bbbf08c1170948b21e25e6af2cf748597cd49e30fd8e4ef58ee7dd244. 
Sep 12 23:59:57.607812 systemd[1]: Started cri-containerd-4bd9824a73116e7a740ca5ae37b0348d5d90922d9ea7993c757e9a2060dc194f.scope - libcontainer container 4bd9824a73116e7a740ca5ae37b0348d5d90922d9ea7993c757e9a2060dc194f. Sep 12 23:59:57.618818 kubelet[2813]: E0912 23:59:57.618755 2813 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 23:59:57.652477 containerd[1701]: time="2025-09-12T23:59:57.652024584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-4f403f96f8,Uid:8db38a138095458eb0f6d00c72b277a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e9c218bbbf08c1170948b21e25e6af2cf748597cd49e30fd8e4ef58ee7dd244\"" Sep 12 23:59:57.666803 containerd[1701]: time="2025-09-12T23:59:57.666478268Z" level=info msg="CreateContainer within sandbox \"3e9c218bbbf08c1170948b21e25e6af2cf748597cd49e30fd8e4ef58ee7dd244\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:59:57.672364 containerd[1701]: time="2025-09-12T23:59:57.672332206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-4f403f96f8,Uid:aa47bbc07fdc4af9fd15185a9183c792,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bd9824a73116e7a740ca5ae37b0348d5d90922d9ea7993c757e9a2060dc194f\"" Sep 12 23:59:57.674578 containerd[1701]: time="2025-09-12T23:59:57.674532893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-4f403f96f8,Uid:e7cfd0d4beee0c578caf4b8b6b9c9f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a18ee04558556618302c529f15673f608044d4f1b9dfa673828be639a23cb6e\"" Sep 12 23:59:57.690011 containerd[1701]: time="2025-09-12T23:59:57.689946061Z" level=info msg="CreateContainer within sandbox \"4bd9824a73116e7a740ca5ae37b0348d5d90922d9ea7993c757e9a2060dc194f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:59:57.695493 containerd[1701]: time="2025-09-12T23:59:57.695454158Z" level=info msg="CreateContainer within sandbox \"1a18ee04558556618302c529f15673f608044d4f1b9dfa673828be639a23cb6e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:59:57.732086 containerd[1701]: time="2025-09-12T23:59:57.732036031Z" level=info msg="CreateContainer within sandbox \"3e9c218bbbf08c1170948b21e25e6af2cf748597cd49e30fd8e4ef58ee7dd244\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"88a7f8cfa3e10937c728359aa590474821eecc59666bf2c484b55e0930bb3e73\"" Sep 12 23:59:57.733134 containerd[1701]: time="2025-09-12T23:59:57.732702873Z" level=info msg="StartContainer for \"88a7f8cfa3e10937c728359aa590474821eecc59666bf2c484b55e0930bb3e73\"" Sep 12 23:59:57.755319 systemd[1]: Started cri-containerd-88a7f8cfa3e10937c728359aa590474821eecc59666bf2c484b55e0930bb3e73.scope - libcontainer container 88a7f8cfa3e10937c728359aa590474821eecc59666bf2c484b55e0930bb3e73. 
Sep 12 23:59:57.768607 containerd[1701]: time="2025-09-12T23:59:57.768378543Z" level=info msg="CreateContainer within sandbox \"1a18ee04558556618302c529f15673f608044d4f1b9dfa673828be639a23cb6e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2b781ca3ede9c37b76ab8fcb71e6e99f08d57d9a7b774284ca6d986506d67dc4\"" Sep 12 23:59:57.770173 containerd[1701]: time="2025-09-12T23:59:57.769556987Z" level=info msg="StartContainer for \"2b781ca3ede9c37b76ab8fcb71e6e99f08d57d9a7b774284ca6d986506d67dc4\"" Sep 12 23:59:57.776336 containerd[1701]: time="2025-09-12T23:59:57.776302088Z" level=info msg="CreateContainer within sandbox \"4bd9824a73116e7a740ca5ae37b0348d5d90922d9ea7993c757e9a2060dc194f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6738e3993278e2ba32e9c018ac28be41efb7f898d0b9f80ed78847fecf7e3b23\"" Sep 12 23:59:57.778230 containerd[1701]: time="2025-09-12T23:59:57.777252131Z" level=info msg="StartContainer for \"6738e3993278e2ba32e9c018ac28be41efb7f898d0b9f80ed78847fecf7e3b23\"" Sep 12 23:59:57.801343 containerd[1701]: time="2025-09-12T23:59:57.801291445Z" level=info msg="StartContainer for \"88a7f8cfa3e10937c728359aa590474821eecc59666bf2c484b55e0930bb3e73\" returns successfully" Sep 12 23:59:57.811127 kubelet[2813]: E0912 23:59:57.810468 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:57.816302 systemd[1]: Started cri-containerd-2b781ca3ede9c37b76ab8fcb71e6e99f08d57d9a7b774284ca6d986506d67dc4.scope - libcontainer container 2b781ca3ede9c37b76ab8fcb71e6e99f08d57d9a7b774284ca6d986506d67dc4. Sep 12 23:59:57.830289 systemd[1]: Started cri-containerd-6738e3993278e2ba32e9c018ac28be41efb7f898d0b9f80ed78847fecf7e3b23.scope - libcontainer container 6738e3993278e2ba32e9c018ac28be41efb7f898d0b9f80ed78847fecf7e3b23. 
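This span walks each static pod through the standard CRI bring-up: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer launches it. RunPodSandbox, CreateContainer, and StartContainer are the real CRI RuntimeService RPC names; the client object below is a hypothetical stub used only to make the ordering concrete:

    def bring_up_static_pod(cri, sandbox_config, container_config):
        """Mirror the RunPodSandbox -> CreateContainer -> StartContainer order above.

        `cri` stands in for a CRI RuntimeService client; it is an assumed stub.
        """
        sandbox_id = cri.RunPodSandbox(sandbox_config)    # e.g. 3e9c218bbbf0...
        container_id = cri.CreateContainer(
            sandbox_id, container_config, sandbox_config  # e.g. 88a7f8cfa3e1...
        )
        cri.StartContainer(container_id)                  # logged "returns successfully"
        return sandbox_id, container_id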
Sep 12 23:59:57.882582 containerd[1701]: time="2025-09-12T23:59:57.882433096Z" level=info msg="StartContainer for \"2b781ca3ede9c37b76ab8fcb71e6e99f08d57d9a7b774284ca6d986506d67dc4\" returns successfully" Sep 12 23:59:57.896727 containerd[1701]: time="2025-09-12T23:59:57.896685780Z" level=info msg="StartContainer for \"6738e3993278e2ba32e9c018ac28be41efb7f898d0b9f80ed78847fecf7e3b23\" returns successfully" Sep 12 23:59:58.820531 kubelet[2813]: E0912 23:59:58.820411 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:58.823869 kubelet[2813]: E0912 23:59:58.823685 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:58.825439 kubelet[2813]: E0912 23:59:58.825423 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:58.880185 kubelet[2813]: I0912 23:59:58.879836 2813 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:59.826136 kubelet[2813]: E0912 23:59:59.825855 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 12 23:59:59.826136 kubelet[2813]: E0912 23:59:59.825966 2813 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.028406 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Sep 13 00:00:00.036372 systemd[1]: logrotate.service: Deactivated successfully. 
Sep 13 00:00:00.453973 kubelet[2813]: I0913 00:00:00.453928 2813 apiserver.go:52] "Watching apiserver" Sep 13 00:00:00.459400 kubelet[2813]: E0913 00:00:00.459356 2813 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-4f403f96f8\" not found" node="ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.561225 kubelet[2813]: I0913 00:00:00.561156 2813 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:00:00.641283 kubelet[2813]: I0913 00:00:00.640665 2813 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.641283 kubelet[2813]: E0913 00:00:00.640703 2813 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-4f403f96f8\": node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 13 00:00:00.660930 kubelet[2813]: I0913 00:00:00.660636 2813 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.760611 kubelet[2813]: E0913 00:00:00.760509 2813 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.761307 kubelet[2813]: I0913 00:00:00.760763 2813 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.765136 kubelet[2813]: E0913 00:00:00.765081 2813 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.765774 kubelet[2813]: I0913 00:00:00.765510 2813 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.781350 kubelet[2813]: E0913 00:00:00.781311 2813 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-4f403f96f8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.824946 kubelet[2813]: I0913 00:00:00.824897 2813 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:00.826889 kubelet[2813]: E0913 00:00:00.826853 2813 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:02.630669 systemd[1]: Reloading requested from client PID 3102 ('systemctl') (unit session-9.scope)... Sep 13 00:00:02.630944 systemd[1]: Reloading... Sep 13 00:00:02.729220 zram_generator::config[3143]: No configuration found. Sep 13 00:00:02.843117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:00:02.933745 systemd[1]: Reloading finished in 302 ms. 
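The "no PriorityClass with name system-node-critical was found" rejections are another bring-up race: the API server bootstraps the built-in system-node-critical and system-cluster-critical PriorityClasses shortly after it starts serving, and mirror-pod creation attempts that land before that bootstrap are refused by admission. Once the class exists the retries go through, which is why the restarted kubelet at 00:00:03 further down sees "already exists" instead. A short client-go check for the built-in class, as a sketch:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // The API server creates this class itself; a NotFound right
        // after cluster bring-up just means "retry in a moment".
        pc, err := cs.SchedulingV1().PriorityClasses().Get(
            context.Background(), "system-node-critical", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s value=%d\n", pc.Name, pc.Value) // expected value: 2000001000
    }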
Sep 13 00:00:02.946194 kubelet[2813]: I0913 00:00:02.946010 2813 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:02.955241 kubelet[2813]: I0913 00:00:02.955211 2813 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:00:02.971802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:00:02.986502 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:00:02.986717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:00:02.986766 systemd[1]: kubelet.service: Consumed 1.370s CPU time, 129.2M memory peak, 0B memory swap peak. Sep 13 00:00:02.996801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:00:03.191843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:00:03.201414 (kubelet)[3206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:00:03.249157 kubelet[3206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:00:03.249157 kubelet[3206]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:00:03.249157 kubelet[3206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:00:03.249157 kubelet[3206]: I0913 00:00:03.247418 3206 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:00:03.255204 kubelet[3206]: I0913 00:00:03.255172 3206 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:00:03.255204 kubelet[3206]: I0913 00:00:03.255199 3206 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:00:03.255418 kubelet[3206]: I0913 00:00:03.255398 3206 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:00:03.256709 kubelet[3206]: I0913 00:00:03.256689 3206 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:00:03.259766 kubelet[3206]: I0913 00:00:03.259150 3206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:00:03.266610 kubelet[3206]: E0913 00:00:03.266569 3206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:00:03.266610 kubelet[3206]: I0913 00:00:03.266607 3206 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:00:03.270127 kubelet[3206]: I0913 00:00:03.269755 3206 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:00:03.270127 kubelet[3206]: I0913 00:00:03.269966 3206 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:00:03.270263 kubelet[3206]: I0913 00:00:03.269988 3206 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-4f403f96f8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:00:03.270388 kubelet[3206]: I0913 00:00:03.270375 3206 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:00:03.270441 kubelet[3206]: I0913 00:00:03.270432 3206 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:00:03.270526 kubelet[3206]: I0913 00:00:03.270517 3206 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:00:03.270723 kubelet[3206]: I0913 00:00:03.270711 3206 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:00:03.270797 kubelet[3206]: I0913 00:00:03.270787 3206 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:00:03.270858 kubelet[3206]: I0913 00:00:03.270851 3206 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:00:03.270913 kubelet[3206]: I0913 00:00:03.270905 3206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:00:03.283134 kubelet[3206]: I0913 00:00:03.281238 3206 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:00:03.284692 kubelet[3206]: I0913 00:00:03.283859 3206 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:00:03.289311 kubelet[3206]: I0913 00:00:03.289297 3206 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:00:03.289531 kubelet[3206]: I0913 00:00:03.289452 3206 server.go:1289] "Started kubelet" Sep 13 00:00:03.297133 kubelet[3206]: I0913 00:00:03.297117 3206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:00:03.301352 kubelet[3206]: I0913 
00:00:03.301294 3206 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:00:03.302166 kubelet[3206]: I0913 00:00:03.302094 3206 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:00:03.306858 kubelet[3206]: I0913 00:00:03.306393 3206 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:00:03.307883 kubelet[3206]: E0913 00:00:03.307847 3206 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-4f403f96f8\" not found" Sep 13 00:00:03.309547 kubelet[3206]: I0913 00:00:03.309179 3206 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:00:03.309547 kubelet[3206]: I0913 00:00:03.309344 3206 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:00:03.311233 kubelet[3206]: I0913 00:00:03.310097 3206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:00:03.311420 kubelet[3206]: I0913 00:00:03.311393 3206 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:00:03.311595 kubelet[3206]: I0913 00:00:03.311559 3206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:00:03.319235 kubelet[3206]: E0913 00:00:03.318479 3206 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:00:03.319235 kubelet[3206]: I0913 00:00:03.319054 3206 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:00:03.319673 kubelet[3206]: I0913 00:00:03.319552 3206 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:00:03.322928 kubelet[3206]: I0913 00:00:03.322886 3206 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:00:03.329348 kubelet[3206]: I0913 00:00:03.329313 3206 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:00:03.332961 kubelet[3206]: I0913 00:00:03.332939 3206 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:00:03.333164 kubelet[3206]: I0913 00:00:03.333079 3206 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:00:03.333164 kubelet[3206]: I0913 00:00:03.333117 3206 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
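The Container Manager NodeConfig dump above carries the default hard-eviction thresholds: memory.available under 100Mi, nodefs.available under 10%, nodefs.inodesFree under 5%, imagefs.available under 15%, imagefs.inodesFree under 5% (percentages are serialized as fractions, hence "Percentage":0.1). A minimal sketch of how such a threshold resolves against node capacity; this is illustrative arithmetic, not the eviction manager's actual code:

    package main

    import "fmt"

    // threshold models one HardEvictionThreshold from the log: either an
    // absolute quantity in bytes or a fraction of capacity.
    type threshold struct {
        signal   string
        quantity int64   // bytes; 0 if percentage-based
        fraction float64 // e.g. 0.1 for "10%"; 0 if quantity-based
    }

    // crossed reports whether the observed available amount is below the
    // threshold, resolving percentages against capacity first.
    func (t threshold) crossed(available, capacity int64) bool {
        limit := t.quantity
        if t.fraction > 0 {
            limit = int64(t.fraction * float64(capacity))
        }
        return available < limit
    }

    func main() {
        mem := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", fraction: 0.1}    // 10%

        // Example: 8Gi RAM with 80Mi free crosses the memory signal;
        // a 30Gi rootfs with 4Gi free stays above the 10% nodefs line.
        fmt.Println(mem.crossed(80<<20, 8<<30))    // true  -> evict
        fmt.Println(nodefs.crossed(4<<30, 30<<30)) // false -> healthy
    }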
Sep 13 00:00:03.333164 kubelet[3206]: I0913 00:00:03.333125 3206 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:00:03.333302 kubelet[3206]: E0913 00:00:03.333285 3206 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:00:03.362225 kubelet[3206]: I0913 00:00:03.362191 3206 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:00:03.362225 kubelet[3206]: I0913 00:00:03.362213 3206 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:00:03.362225 kubelet[3206]: I0913 00:00:03.362238 3206 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:00:03.362404 kubelet[3206]: I0913 00:00:03.362368 3206 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:00:03.362404 kubelet[3206]: I0913 00:00:03.362378 3206 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:00:03.362404 kubelet[3206]: I0913 00:00:03.362395 3206 policy_none.go:49] "None policy: Start" Sep 13 00:00:03.362404 kubelet[3206]: I0913 00:00:03.362404 3206 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:00:03.362494 kubelet[3206]: I0913 00:00:03.362413 3206 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:00:03.362516 kubelet[3206]: I0913 00:00:03.362495 3206 state_mem.go:75] "Updated machine memory state" Sep 13 00:00:03.366134 kubelet[3206]: E0913 00:00:03.366075 3206 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:00:03.366280 kubelet[3206]: I0913 00:00:03.366261 3206 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:00:03.366311 kubelet[3206]: I0913 00:00:03.366279 3206 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:00:03.366834 kubelet[3206]: I0913 00:00:03.366749 3206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:00:03.370379 kubelet[3206]: E0913 00:00:03.370338 3206 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:00:03.434623 kubelet[3206]: I0913 00:00:03.433961 3206 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.434623 kubelet[3206]: I0913 00:00:03.434068 3206 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.434623 kubelet[3206]: I0913 00:00:03.434323 3206 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.442918 kubelet[3206]: I0913 00:00:03.442807 3206 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:00:03.448188 kubelet[3206]: I0913 00:00:03.448056 3206 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:00:03.448738 kubelet[3206]: I0913 00:00:03.448651 3206 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:00:03.448738 kubelet[3206]: E0913 00:00:03.448694 3206 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.469705 kubelet[3206]: I0913 00:00:03.468790 3206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.480419 kubelet[3206]: I0913 00:00:03.480269 3206 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.480419 kubelet[3206]: I0913 00:00:03.480371 3206 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510742 kubelet[3206]: I0913 00:00:03.510463 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa47bbc07fdc4af9fd15185a9183c792-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" (UID: \"aa47bbc07fdc4af9fd15185a9183c792\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510742 kubelet[3206]: I0913 00:00:03.510497 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa47bbc07fdc4af9fd15185a9183c792-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" (UID: \"aa47bbc07fdc4af9fd15185a9183c792\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510742 kubelet[3206]: I0913 00:00:03.510518 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510742 kubelet[3206]: I0913 00:00:03.510535 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510742 kubelet[3206]: I0913 00:00:03.510553 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510952 kubelet[3206]: I0913 00:00:03.510568 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510952 kubelet[3206]: I0913 00:00:03.510584 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e7cfd0d4beee0c578caf4b8b6b9c9f5c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-4f403f96f8\" (UID: \"e7cfd0d4beee0c578caf4b8b6b9c9f5c\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510952 kubelet[3206]: I0913 00:00:03.510605 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8db38a138095458eb0f6d00c72b277a4-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-4f403f96f8\" (UID: \"8db38a138095458eb0f6d00c72b277a4\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:03.510952 kubelet[3206]: I0913 00:00:03.510621 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa47bbc07fdc4af9fd15185a9183c792-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" (UID: \"aa47bbc07fdc4af9fd15185a9183c792\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:04.273903 kubelet[3206]: I0913 00:00:04.273863 3206 apiserver.go:52] "Watching apiserver" Sep 13 00:00:04.309376 kubelet[3206]: I0913 00:00:04.309329 3206 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:00:04.350596 kubelet[3206]: I0913 00:00:04.350144 3206 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:04.350596 kubelet[3206]: I0913 00:00:04.350423 3206 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:04.362006 kubelet[3206]: I0913 00:00:04.361460 3206 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:00:04.362006 kubelet[3206]: E0913 00:00:04.361527 3206 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.5-n-4f403f96f8\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:04.363752 kubelet[3206]: I0913 00:00:04.363324 
3206 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 00:00:04.363752 kubelet[3206]: E0913 00:00:04.363586 3206 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.5-n-4f403f96f8\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" Sep 13 00:00:04.392692 kubelet[3206]: I0913 00:00:04.392557 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-4f403f96f8" podStartSLOduration=2.392540217 podStartE2EDuration="2.392540217s" podCreationTimestamp="2025-09-13 00:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:04.392421977 +0000 UTC m=+1.187192842" watchObservedRunningTime="2025-09-13 00:00:04.392540217 +0000 UTC m=+1.187311162" Sep 13 00:00:04.393219 kubelet[3206]: I0913 00:00:04.392821 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-4f403f96f8" podStartSLOduration=1.392812778 podStartE2EDuration="1.392812778s" podCreationTimestamp="2025-09-13 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:04.379312385 +0000 UTC m=+1.174083250" watchObservedRunningTime="2025-09-13 00:00:04.392812778 +0000 UTC m=+1.187583683" Sep 13 00:00:04.404463 kubelet[3206]: I0913 00:00:04.404275 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-4f403f96f8" podStartSLOduration=1.404261486 podStartE2EDuration="1.404261486s" podCreationTimestamp="2025-09-13 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:04.403582324 +0000 UTC m=+1.198353269" watchObservedRunningTime="2025-09-13 00:00:04.404261486 +0000 UTC m=+1.199032391" Sep 13 00:00:08.346623 kubelet[3206]: I0913 00:00:08.345977 3206 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:00:08.348166 containerd[1701]: time="2025-09-13T00:00:08.347333827Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:00:08.348541 kubelet[3206]: I0913 00:00:08.347869 3206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:00:09.366515 systemd[1]: Created slice kubepods-besteffort-pod01ac31b5_c6c3_4883_9e13_52bc1e6d06ce.slice - libcontainer container kubepods-besteffort-pod01ac31b5_c6c3_4883_9e13_52bc1e6d06ce.slice. 
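The pod_startup_latency_tracker entries above are internally consistent: for kube-controller-manager, observedRunningTime 00:00:04.392540217 minus podCreationTimestamp 00:00:02 is exactly the reported podStartSLOduration of 2.392540217s, and since nothing was pulled (the pulling timestamps sit at the zero time 0001-01-01) the SLO and E2E figures coincide. Checked in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matches Go's time.Time.String(), which is what the
        // kubelet printed into the log above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-09-13 00:00:02 +0000 UTC")
        running, _ := time.Parse(layout, "2025-09-13 00:00:04.392540217 +0000 UTC")
        fmt.Println(running.Sub(created)) // 2.392540217s == podStartSLOduration
    }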
Sep 13 00:00:09.444932 kubelet[3206]: I0913 00:00:09.444787 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01ac31b5-c6c3-4883-9e13-52bc1e6d06ce-kube-proxy\") pod \"kube-proxy-tbj5v\" (UID: \"01ac31b5-c6c3-4883-9e13-52bc1e6d06ce\") " pod="kube-system/kube-proxy-tbj5v" Sep 13 00:00:09.444932 kubelet[3206]: I0913 00:00:09.444828 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01ac31b5-c6c3-4883-9e13-52bc1e6d06ce-xtables-lock\") pod \"kube-proxy-tbj5v\" (UID: \"01ac31b5-c6c3-4883-9e13-52bc1e6d06ce\") " pod="kube-system/kube-proxy-tbj5v" Sep 13 00:00:09.444932 kubelet[3206]: I0913 00:00:09.444845 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01ac31b5-c6c3-4883-9e13-52bc1e6d06ce-lib-modules\") pod \"kube-proxy-tbj5v\" (UID: \"01ac31b5-c6c3-4883-9e13-52bc1e6d06ce\") " pod="kube-system/kube-proxy-tbj5v" Sep 13 00:00:09.444932 kubelet[3206]: I0913 00:00:09.444862 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68td7\" (UniqueName: \"kubernetes.io/projected/01ac31b5-c6c3-4883-9e13-52bc1e6d06ce-kube-api-access-68td7\") pod \"kube-proxy-tbj5v\" (UID: \"01ac31b5-c6c3-4883-9e13-52bc1e6d06ce\") " pod="kube-system/kube-proxy-tbj5v" Sep 13 00:00:09.480952 systemd[1]: Created slice kubepods-besteffort-podb561d506_7f0c_469b_b136_121dcfc37b52.slice - libcontainer container kubepods-besteffort-podb561d506_7f0c_469b_b136_121dcfc37b52.slice. Sep 13 00:00:09.545853 kubelet[3206]: I0913 00:00:09.545805 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b561d506-7f0c-469b-b136-121dcfc37b52-var-lib-calico\") pod \"tigera-operator-755d956888-x5j2j\" (UID: \"b561d506-7f0c-469b-b136-121dcfc37b52\") " pod="tigera-operator/tigera-operator-755d956888-x5j2j" Sep 13 00:00:09.545853 kubelet[3206]: I0913 00:00:09.545857 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmlw\" (UniqueName: \"kubernetes.io/projected/b561d506-7f0c-469b-b136-121dcfc37b52-kube-api-access-4jmlw\") pod \"tigera-operator-755d956888-x5j2j\" (UID: \"b561d506-7f0c-469b-b136-121dcfc37b52\") " pod="tigera-operator/tigera-operator-755d956888-x5j2j" Sep 13 00:00:09.680412 containerd[1701]: time="2025-09-13T00:00:09.679982986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tbj5v,Uid:01ac31b5-c6c3-4883-9e13-52bc1e6d06ce,Namespace:kube-system,Attempt:0,}" Sep 13 00:00:09.739286 containerd[1701]: time="2025-09-13T00:00:09.739058211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:09.739286 containerd[1701]: time="2025-09-13T00:00:09.739143611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:09.739286 containerd[1701]: time="2025-09-13T00:00:09.739159451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:09.739484 containerd[1701]: time="2025-09-13T00:00:09.739291492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:09.756284 systemd[1]: Started cri-containerd-3c20ae6dc02904438c1fb06b85b78fe6306b182922c7c6b9468a7b962cccbda0.scope - libcontainer container 3c20ae6dc02904438c1fb06b85b78fe6306b182922c7c6b9468a7b962cccbda0. Sep 13 00:00:09.780999 containerd[1701]: time="2025-09-13T00:00:09.780951114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tbj5v,Uid:01ac31b5-c6c3-4883-9e13-52bc1e6d06ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c20ae6dc02904438c1fb06b85b78fe6306b182922c7c6b9468a7b962cccbda0\"" Sep 13 00:00:09.786826 containerd[1701]: time="2025-09-13T00:00:09.786513368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x5j2j,Uid:b561d506-7f0c-469b-b136-121dcfc37b52,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:00:09.794863 containerd[1701]: time="2025-09-13T00:00:09.794266987Z" level=info msg="CreateContainer within sandbox \"3c20ae6dc02904438c1fb06b85b78fe6306b182922c7c6b9468a7b962cccbda0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:00:09.851267 containerd[1701]: time="2025-09-13T00:00:09.851214007Z" level=info msg="CreateContainer within sandbox \"3c20ae6dc02904438c1fb06b85b78fe6306b182922c7c6b9468a7b962cccbda0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5720ec02fa65beb7d9ad8c5d804c2f64a09a20c2d9538f422d64e0e7435be45a\"" Sep 13 00:00:09.854041 containerd[1701]: time="2025-09-13T00:00:09.853990614Z" level=info msg="StartContainer for \"5720ec02fa65beb7d9ad8c5d804c2f64a09a20c2d9538f422d64e0e7435be45a\"" Sep 13 00:00:09.858686 containerd[1701]: time="2025-09-13T00:00:09.858276344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:09.858686 containerd[1701]: time="2025-09-13T00:00:09.858335024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:09.858686 containerd[1701]: time="2025-09-13T00:00:09.858347144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:09.858686 containerd[1701]: time="2025-09-13T00:00:09.858421265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:09.881302 systemd[1]: Started cri-containerd-d37906a47d4aaf5ef4e574538bb56af8cf21000cd08986066850447abcba4025.scope - libcontainer container d37906a47d4aaf5ef4e574538bb56af8cf21000cd08986066850447abcba4025. Sep 13 00:00:09.894273 systemd[1]: Started cri-containerd-5720ec02fa65beb7d9ad8c5d804c2f64a09a20c2d9538f422d64e0e7435be45a.scope - libcontainer container 5720ec02fa65beb7d9ad8c5d804c2f64a09a20c2d9538f422d64e0e7435be45a. 
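The four runc-v2 "loading plugin" lines and the transient cri-containerd-<id>.scope units are the two visible halves of one CRI RunPodSandbox call: the kubelet asks containerd for a sandbox, containerd launches a runc shim, and systemd tracks the shim's cgroup as a scope. A minimal sketch that talks to the same CRI endpoint and lists sandboxes; the socket path below is an assumption (containerd's default), not something read from this log:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed default containerd CRI endpoint.
        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListPodSandbox(context.Background(),
            &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, sb := range resp.Items {
            fmt.Println(sb.Id, sb.Metadata.Namespace+"/"+sb.Metadata.Name)
        }
    }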
Sep 13 00:00:09.937898 containerd[1701]: time="2025-09-13T00:00:09.936303016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-x5j2j,Uid:b561d506-7f0c-469b-b136-121dcfc37b52,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d37906a47d4aaf5ef4e574538bb56af8cf21000cd08986066850447abcba4025\"" Sep 13 00:00:09.937898 containerd[1701]: time="2025-09-13T00:00:09.936321816Z" level=info msg="StartContainer for \"5720ec02fa65beb7d9ad8c5d804c2f64a09a20c2d9538f422d64e0e7435be45a\" returns successfully" Sep 13 00:00:09.940169 containerd[1701]: time="2025-09-13T00:00:09.939727025Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:00:12.226831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119512529.mount: Deactivated successfully. Sep 13 00:00:12.603138 containerd[1701]: time="2025-09-13T00:00:12.602739536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:12.606653 containerd[1701]: time="2025-09-13T00:00:12.606621265Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 13 00:00:12.611338 containerd[1701]: time="2025-09-13T00:00:12.611241796Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:12.616647 containerd[1701]: time="2025-09-13T00:00:12.616597369Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:12.617568 containerd[1701]: time="2025-09-13T00:00:12.617420851Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.677656466s" Sep 13 00:00:12.617568 containerd[1701]: time="2025-09-13T00:00:12.617453971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 13 00:00:12.626611 containerd[1701]: time="2025-09-13T00:00:12.626575393Z" level=info msg="CreateContainer within sandbox \"d37906a47d4aaf5ef4e574538bb56af8cf21000cd08986066850447abcba4025\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:00:12.659252 containerd[1701]: time="2025-09-13T00:00:12.659134709Z" level=info msg="CreateContainer within sandbox \"d37906a47d4aaf5ef4e574538bb56af8cf21000cd08986066850447abcba4025\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c3bfc5e99754d127f231ee2c0f397ea8eed3977b2ab27f32038ad23793f71f39\"" Sep 13 00:00:12.659691 containerd[1701]: time="2025-09-13T00:00:12.659648751Z" level=info msg="StartContainer for \"c3bfc5e99754d127f231ee2c0f397ea8eed3977b2ab27f32038ad23793f71f39\"" Sep 13 00:00:12.685278 systemd[1]: Started cri-containerd-c3bfc5e99754d127f231ee2c0f397ea8eed3977b2ab27f32038ad23793f71f39.scope - libcontainer container c3bfc5e99754d127f231ee2c0f397ea8eed3977b2ab27f32038ad23793f71f39. 
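The operator image pull above works out to roughly 8.3 MB/s: 22,148,360 compressed bytes in 2.677656466s (the nearby "bytes read=22152365" counter is slightly larger, presumably because it also counts manifest and config blobs). As arithmetic:

    package main

    import "fmt"

    func main() {
        const bytes = 22148360      // image size reported in the log
        const seconds = 2.677656466 // pull duration reported in the log
        fmt.Printf("%.2f MB/s\n", bytes/seconds/1e6) // ~8.27 MB/s
    }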
Sep 13 00:00:12.716458 containerd[1701]: time="2025-09-13T00:00:12.714985241Z" level=info msg="StartContainer for \"c3bfc5e99754d127f231ee2c0f397ea8eed3977b2ab27f32038ad23793f71f39\" returns successfully" Sep 13 00:00:13.349321 kubelet[3206]: I0913 00:00:13.349050 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tbj5v" podStartSLOduration=4.349035018 podStartE2EDuration="4.349035018s" podCreationTimestamp="2025-09-13 00:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:00:10.375584417 +0000 UTC m=+7.170355322" watchObservedRunningTime="2025-09-13 00:00:13.349035018 +0000 UTC m=+10.143805923" Sep 13 00:00:13.384309 kubelet[3206]: I0913 00:00:13.383982 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-x5j2j" podStartSLOduration=1.704049629 podStartE2EDuration="4.38396594s" podCreationTimestamp="2025-09-13 00:00:09 +0000 UTC" firstStartedPulling="2025-09-13 00:00:09.938596702 +0000 UTC m=+6.733367727" lastFinishedPulling="2025-09-13 00:00:12.618513133 +0000 UTC m=+9.413284038" observedRunningTime="2025-09-13 00:00:13.3839261 +0000 UTC m=+10.178697005" watchObservedRunningTime="2025-09-13 00:00:13.38396594 +0000 UTC m=+10.178736845" Sep 13 00:00:18.672547 sudo[2201]: pam_unix(sudo:session): session closed for user root Sep 13 00:00:18.740669 sshd[2198]: pam_unix(sshd:session): session closed for user core Sep 13 00:00:18.746707 systemd[1]: sshd@6-10.200.20.16:22-10.200.16.10:45418.service: Deactivated successfully. Sep 13 00:00:18.748176 systemd-logind[1678]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:00:18.748701 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:00:18.748881 systemd[1]: session-9.scope: Consumed 8.289s CPU time, 151.2M memory peak, 0B memory swap peak. Sep 13 00:00:18.751607 systemd-logind[1678]: Removed session 9. Sep 13 00:00:26.272696 systemd[1]: Created slice kubepods-besteffort-pod2b80b7fd_367f_4f6c_ba98_e1121d937d32.slice - libcontainer container kubepods-besteffort-pod2b80b7fd_367f_4f6c_ba98_e1121d937d32.slice. 
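For tigera-operator the tracker reports different SLO and E2E figures, and the numbers line up with the SLO duration excluding the image pull window: by the monotonic offsets, the pull ran from m=+6.733367727 to m=+9.413284038, i.e. 2.679916311s, and 4.38396594s minus that is exactly the reported 1.704049629. Worked through:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+...) and durations copied from the log.
        const (
            e2e        = 4.38396594  // podStartE2EDuration, seconds
            pullStart  = 6.733367727 // firstStartedPulling, m=+ offset
            pullFinish = 9.413284038 // lastFinishedPulling, m=+ offset
        )
        slo := e2e - (pullFinish - pullStart)
        fmt.Printf("%.9f\n", slo) // 1.704049629 == podStartSLOduration
    }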
Sep 13 00:00:26.349732 kubelet[3206]: I0913 00:00:26.348694 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b80b7fd-367f-4f6c-ba98-e1121d937d32-tigera-ca-bundle\") pod \"calico-typha-5db4f7f57-blxrh\" (UID: \"2b80b7fd-367f-4f6c-ba98-e1121d937d32\") " pod="calico-system/calico-typha-5db4f7f57-blxrh" Sep 13 00:00:26.349732 kubelet[3206]: I0913 00:00:26.348743 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpgrh\" (UniqueName: \"kubernetes.io/projected/2b80b7fd-367f-4f6c-ba98-e1121d937d32-kube-api-access-rpgrh\") pod \"calico-typha-5db4f7f57-blxrh\" (UID: \"2b80b7fd-367f-4f6c-ba98-e1121d937d32\") " pod="calico-system/calico-typha-5db4f7f57-blxrh" Sep 13 00:00:26.349732 kubelet[3206]: I0913 00:00:26.348761 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2b80b7fd-367f-4f6c-ba98-e1121d937d32-typha-certs\") pod \"calico-typha-5db4f7f57-blxrh\" (UID: \"2b80b7fd-367f-4f6c-ba98-e1121d937d32\") " pod="calico-system/calico-typha-5db4f7f57-blxrh" Sep 13 00:00:26.428940 systemd[1]: Created slice kubepods-besteffort-podba15f285_5c75_4969_9991_66e6493e144f.slice - libcontainer container kubepods-besteffort-podba15f285_5c75_4969_9991_66e6493e144f.slice. Sep 13 00:00:26.450359 kubelet[3206]: I0913 00:00:26.449598 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-cni-net-dir\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450359 kubelet[3206]: I0913 00:00:26.449647 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-xtables-lock\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450359 kubelet[3206]: I0913 00:00:26.449686 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-flexvol-driver-host\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450359 kubelet[3206]: I0913 00:00:26.449713 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-cni-bin-dir\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450359 kubelet[3206]: I0913 00:00:26.449730 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba15f285-5c75-4969-9991-66e6493e144f-node-certs\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450622 kubelet[3206]: I0913 00:00:26.449744 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ba15f285-5c75-4969-9991-66e6493e144f-tigera-ca-bundle\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450622 kubelet[3206]: I0913 00:00:26.449759 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-var-run-calico\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450622 kubelet[3206]: I0913 00:00:26.449777 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-var-lib-calico\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450622 kubelet[3206]: I0913 00:00:26.449791 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7wn\" (UniqueName: \"kubernetes.io/projected/ba15f285-5c75-4969-9991-66e6493e144f-kube-api-access-rk7wn\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450622 kubelet[3206]: I0913 00:00:26.449808 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-cni-log-dir\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450729 kubelet[3206]: I0913 00:00:26.449821 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-lib-modules\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.450729 kubelet[3206]: I0913 00:00:26.449835 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba15f285-5c75-4969-9991-66e6493e144f-policysync\") pod \"calico-node-fstqq\" (UID: \"ba15f285-5c75-4969-9991-66e6493e144f\") " pod="calico-system/calico-node-fstqq" Sep 13 00:00:26.565195 kubelet[3206]: E0913 00:00:26.564420 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.565195 kubelet[3206]: W0913 00:00:26.564445 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.565848 kubelet[3206]: E0913 00:00:26.565803 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:26.578669 containerd[1701]: time="2025-09-13T00:00:26.577798453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db4f7f57-blxrh,Uid:2b80b7fd-367f-4f6c-ba98-e1121d937d32,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:26.579965 kubelet[3206]: E0913 00:00:26.579342 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.579965 kubelet[3206]: W0913 00:00:26.579366 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.579965 kubelet[3206]: E0913 00:00:26.579387 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.624056 containerd[1701]: time="2025-09-13T00:00:26.623606795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:00:26.624056 containerd[1701]: time="2025-09-13T00:00:26.623672676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:00:26.624056 containerd[1701]: time="2025-09-13T00:00:26.623696996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:26.625376 containerd[1701]: time="2025-09-13T00:00:26.625268759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:00:26.651989 systemd[1]: Started cri-containerd-b1d6291513622cdf703c2a11120c373808f249f3d8f211700b727070b64741c1.scope - libcontainer container b1d6291513622cdf703c2a11120c373808f249f3d8f211700b727070b64741c1. 
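The driver-call triplets that dominate the rest of this section all trace back to one missing file: the uds FlexVolume driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ is not installed yet (Calico normally drops it in place via calico-node's flexvol-driver init container, which has not run at this point). The exec fails, the driver therefore produces no output, and unmarshalling that empty output yields "unexpected end of JSON input". Both error strings reproduce directly from the Go standard library:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // An empty driver response reproduces the unmarshal error:
        var out map[string]any
        err := json.Unmarshal([]byte(""), &out)
        fmt.Println(err) // unexpected end of JSON input

        // A missing driver binary reproduces the exec error
        // (assuming no "uds" binary is on $PATH here):
        _, err = exec.LookPath("uds")
        fmt.Println(err) // exec: "uds": executable file not found in $PATH
    }

The probe repeats on every plugin-dir rescan, so the triplets keep recurring below until the driver binary is installed.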
Sep 13 00:00:26.659756 kubelet[3206]: E0913 00:00:26.659710 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:26.732319 containerd[1701]: time="2025-09-13T00:00:26.732256118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db4f7f57-blxrh,Uid:2b80b7fd-367f-4f6c-ba98-e1121d937d32,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1d6291513622cdf703c2a11120c373808f249f3d8f211700b727070b64741c1\"" Sep 13 00:00:26.737413 containerd[1701]: time="2025-09-13T00:00:26.737344569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fstqq,Uid:ba15f285-5c75-4969-9991-66e6493e144f,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:26.743664 containerd[1701]: time="2025-09-13T00:00:26.743463063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:00:26.746619 kubelet[3206]: E0913 00:00:26.746575 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.746619 kubelet[3206]: W0913 00:00:26.746607 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.746851 kubelet[3206]: E0913 00:00:26.746629 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.747271 kubelet[3206]: E0913 00:00:26.747018 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.747271 kubelet[3206]: W0913 00:00:26.747034 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.747271 kubelet[3206]: E0913 00:00:26.747073 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.748121 kubelet[3206]: E0913 00:00:26.747986 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.749179 kubelet[3206]: W0913 00:00:26.749152 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.749406 kubelet[3206]: E0913 00:00:26.749275 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:26.751469 kubelet[3206]: E0913 00:00:26.751196 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.751469 kubelet[3206]: W0913 00:00:26.751216 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.751469 kubelet[3206]: E0913 00:00:26.751232 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.751710 kubelet[3206]: E0913 00:00:26.751695 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.751839 kubelet[3206]: W0913 00:00:26.751824 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.751895 kubelet[3206]: E0913 00:00:26.751884 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.754233 kubelet[3206]: E0913 00:00:26.754211 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.754439 kubelet[3206]: W0913 00:00:26.754325 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.754439 kubelet[3206]: E0913 00:00:26.754344 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.754583 kubelet[3206]: E0913 00:00:26.754571 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.754652 kubelet[3206]: W0913 00:00:26.754640 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.754780 kubelet[3206]: E0913 00:00:26.754696 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:26.754893 kubelet[3206]: E0913 00:00:26.754882 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:26.754950 kubelet[3206]: W0913 00:00:26.754939 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:26.755006 kubelet[3206]: E0913 00:00:26.754995 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:00:26.755475 kubelet[3206]: E0913 00:00:26.755453 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:00:26.756056 kubelet[3206]: W0913 00:00:26.755558 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:00:26.756056 kubelet[3206]: E0913 00:00:26.755578 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the E/W/E FlexVolume init-probe triplet above repeats verbatim (only timestamps change) through 00:00:26.778, interleaved with the volume-attach records below ...]
Sep 13 00:00:26.767046 kubelet[3206]: I0913 00:00:26.766030 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62vrm\" (UniqueName: \"kubernetes.io/projected/3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67-kube-api-access-62vrm\") pod \"csi-node-driver-tlnn9\" (UID: \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\") " pod="calico-system/csi-node-driver-tlnn9"
Sep 13 00:00:26.767490 kubelet[3206]: I0913 00:00:26.767455 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67-socket-dir\") pod \"csi-node-driver-tlnn9\" (UID: \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\") " pod="calico-system/csi-node-driver-tlnn9"
Sep 13 00:00:26.769447 kubelet[3206]: I0913 00:00:26.769384 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67-kubelet-dir\") pod \"csi-node-driver-tlnn9\" (UID: \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\") " pod="calico-system/csi-node-driver-tlnn9"
Sep 13 00:00:26.770332 kubelet[3206]: I0913 00:00:26.770138 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67-registration-dir\") pod \"csi-node-driver-tlnn9\" (UID: \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\") " pod="calico-system/csi-node-driver-tlnn9"
Sep 13 00:00:26.774084 kubelet[3206]: I0913 00:00:26.773982 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67-varrun\") pod \"csi-node-driver-tlnn9\" (UID: \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\") " pod="calico-system/csi-node-driver-tlnn9"
Sep 13 00:00:26.807919 containerd[1701]: time="2025-09-13T00:00:26.796408061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:00:26.807919 containerd[1701]: time="2025-09-13T00:00:26.796568261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:00:26.807919 containerd[1701]: time="2025-09-13T00:00:26.796596661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:00:26.807919 containerd[1701]: time="2025-09-13T00:00:26.796785701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:00:26.821302 systemd[1]: Started cri-containerd-4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565.scope - libcontainer container 4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565.
Sep 13 00:00:26.851632 containerd[1701]: time="2025-09-13T00:00:26.850475421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fstqq,Uid:ba15f285-5c75-4969-9991-66e6493e144f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\""
[... the E/W/E FlexVolume init-probe triplet resumes at 00:00:26.880 and repeats, only timestamps changing, through 00:00:26.938 ...]
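The flood of kubelet errors above has a single root cause repeating: the FlexVolume prober finds the nodeagent~uds directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs its driver with the `init` argument, finds no binary ("executable file not found in $PATH"), gets empty output, and then fails to parse that empty string as the JSON status a FlexVolume driver must print. A minimal sketch reproducing both halves of the failure (the driverStatus type here is a simplified stand-in for kubelet's internal FlexVolume status struct, not its real definition):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the JSON status object a
// FlexVolume driver is expected to print on stdout; kubelet's real
// struct lives in its volume/flexvolume package.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Invoke the driver the way kubelet's prober does: <driver> init.
	// The path is taken from the log; on this node the binary is
	// missing, so the call fails and out stays empty.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// Parsing the empty output reproduces the log's unmarshal error.
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal failed:", err) // "unexpected end of JSON input"
	}
}
```

Go's encoding/json returns exactly "unexpected end of JSON input" for empty input, which matches the log text. By the FlexVolume convention, a healthy driver answers `init` with something like `{"status":"Success","capabilities":{"attach":false}}`; installing such a binary, or removing the stray nodeagent~uds directory, would silence the probe.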
Sep 13 00:00:28.333807 kubelet[3206]: E0913 00:00:28.333760 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67"
[... the same pod-sync error for csi-node-driver-tlnn9 recurs every ~2 s at 00:00:30, :32, :34, :36 and :38 while the CNI plugin is still uninitialized ...]
Sep 13 00:00:38.358267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185196346.mount: Deactivated successfully.
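These pod-sync failures are kubelet enforcing the runtime's NetworkReady condition: until Calico writes a CNI config into place, the container runtime reports NetworkReady=false and kubelet refuses to sync pods that need pod networking. A sketch of reading that condition straight from the CRI endpoint (the socket path is assumed from a stock containerd setup; this is an illustration, not a tool from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI socket; path assumed, adjust for your node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}

	// kubelet keeps reporting NetworkPluginNotReady until the runtime's
	// NetworkReady condition turns true, i.e. once a CNI config
	// (here, Calico's) exists on the node.
	for _, c := range resp.GetStatus().GetConditions() {
		fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
	}
}
```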
Sep 13 00:00:38.801648 containerd[1701]: time="2025-09-13T00:00:38.801594997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:38.804487 containerd[1701]: time="2025-09-13T00:00:38.804356403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 13 00:00:38.808425 containerd[1701]: time="2025-09-13T00:00:38.807853972Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:38.812214 containerd[1701]: time="2025-09-13T00:00:38.812166942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:00:38.813095 containerd[1701]: time="2025-09-13T00:00:38.813059384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 12.069551841s"
Sep 13 00:00:38.813095 containerd[1701]: time="2025-09-13T00:00:38.813091104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 13 00:00:38.814448 containerd[1701]: time="2025-09-13T00:00:38.814408908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:00:38.832935 containerd[1701]: time="2025-09-13T00:00:38.832897432Z" level=info msg="CreateContainer within sandbox \"b1d6291513622cdf703c2a11120c373808f249f3d8f211700b727070b64741c1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:00:38.869631 containerd[1701]: time="2025-09-13T00:00:38.869586561Z" level=info msg="CreateContainer within sandbox \"b1d6291513622cdf703c2a11120c373808f249f3d8f211700b727070b64741c1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cf4e343d2146e73ecb8dc7f5ad1ad1474410b758e7bd868a2ed6cde94a06d58b\""
Sep 13 00:00:38.870289 containerd[1701]: time="2025-09-13T00:00:38.870246642Z" level=info msg="StartContainer for \"cf4e343d2146e73ecb8dc7f5ad1ad1474410b758e7bd868a2ed6cde94a06d58b\""
Sep 13 00:00:38.893284 systemd[1]: Started cri-containerd-cf4e343d2146e73ecb8dc7f5ad1ad1474410b758e7bd868a2ed6cde94a06d58b.scope - libcontainer container cf4e343d2146e73ecb8dc7f5ad1ad1474410b758e7bd868a2ed6cde94a06d58b.
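That pull moved 33,105,629 bytes in 12.07 s, roughly 2.7 MB/s from ghcr.io. The ImageCreate → Pulled → CreateContainer → StartContainer sequence is containerd's ordinary CRI image-and-container flow; the same pull can be reproduced against containerd's Go client. A minimal sketch (the socket path and the k8s.io namespace are assumed defaults, not taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// CRI-managed images live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Same reference the kubelet pulled above; WithPullUnpack also
	// unpacks layers into a snapshot, mirroring what CRI does before
	// CreateContainer/StartContainer.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```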
Sep 13 00:00:38.927581 containerd[1701]: time="2025-09-13T00:00:38.927519140Z" level=info msg="StartContainer for \"cf4e343d2146e73ecb8dc7f5ad1ad1474410b758e7bd868a2ed6cde94a06d58b\" returns successfully"
[... the E/W/E FlexVolume init-probe triplet floods again from 00:00:39.446 through 00:00:39.472, identical except timestamps ...]
Error: unexpected end of JSON input" Sep 13 00:00:40.334352 kubelet[3206]: E0913 00:00:40.334279 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:40.422742 kubelet[3206]: I0913 00:00:40.422354 3206 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:00:40.454949 kubelet[3206]: E0913 00:00:40.454914 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.454949 kubelet[3206]: W0913 00:00:40.454942 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455336 kubelet[3206]: E0913 00:00:40.454963 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.455336 kubelet[3206]: E0913 00:00:40.455131 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.455336 kubelet[3206]: W0913 00:00:40.455139 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455336 kubelet[3206]: E0913 00:00:40.455149 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.455336 kubelet[3206]: E0913 00:00:40.455292 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.455336 kubelet[3206]: W0913 00:00:40.455299 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455336 kubelet[3206]: E0913 00:00:40.455307 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.455492 kubelet[3206]: E0913 00:00:40.455440 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.455492 kubelet[3206]: W0913 00:00:40.455449 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455492 kubelet[3206]: E0913 00:00:40.455459 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:40.455611 kubelet[3206]: E0913 00:00:40.455596 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.455611 kubelet[3206]: W0913 00:00:40.455609 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455674 kubelet[3206]: E0913 00:00:40.455618 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.455755 kubelet[3206]: E0913 00:00:40.455746 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.455755 kubelet[3206]: W0913 00:00:40.455753 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455816 kubelet[3206]: E0913 00:00:40.455761 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.455894 kubelet[3206]: E0913 00:00:40.455881 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.455894 kubelet[3206]: W0913 00:00:40.455892 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.455953 kubelet[3206]: E0913 00:00:40.455902 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.456051 kubelet[3206]: E0913 00:00:40.456029 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.456051 kubelet[3206]: W0913 00:00:40.456050 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.456124 kubelet[3206]: E0913 00:00:40.456057 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.456227 kubelet[3206]: E0913 00:00:40.456214 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.456227 kubelet[3206]: W0913 00:00:40.456225 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.456284 kubelet[3206]: E0913 00:00:40.456234 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:40.456371 kubelet[3206]: E0913 00:00:40.456357 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.456371 kubelet[3206]: W0913 00:00:40.456369 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.456440 kubelet[3206]: E0913 00:00:40.456380 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.456508 kubelet[3206]: E0913 00:00:40.456496 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.456508 kubelet[3206]: W0913 00:00:40.456507 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.456570 kubelet[3206]: E0913 00:00:40.456515 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.456642 kubelet[3206]: E0913 00:00:40.456630 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.456642 kubelet[3206]: W0913 00:00:40.456640 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.456701 kubelet[3206]: E0913 00:00:40.456648 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.456839 kubelet[3206]: E0913 00:00:40.456821 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.456839 kubelet[3206]: W0913 00:00:40.456838 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.456906 kubelet[3206]: E0913 00:00:40.456849 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.457026 kubelet[3206]: E0913 00:00:40.457012 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.457026 kubelet[3206]: W0913 00:00:40.457025 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.457087 kubelet[3206]: E0913 00:00:40.457034 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:40.457192 kubelet[3206]: E0913 00:00:40.457179 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.457192 kubelet[3206]: W0913 00:00:40.457191 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.457256 kubelet[3206]: E0913 00:00:40.457200 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.472859 kubelet[3206]: E0913 00:00:40.472781 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.472859 kubelet[3206]: W0913 00:00:40.472802 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.472859 kubelet[3206]: E0913 00:00:40.472819 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.473152 kubelet[3206]: E0913 00:00:40.473037 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.473152 kubelet[3206]: W0913 00:00:40.473047 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.473152 kubelet[3206]: E0913 00:00:40.473058 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.473545 kubelet[3206]: E0913 00:00:40.473438 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.473545 kubelet[3206]: W0913 00:00:40.473452 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.473545 kubelet[3206]: E0913 00:00:40.473464 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.473769 kubelet[3206]: E0913 00:00:40.473699 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.473769 kubelet[3206]: W0913 00:00:40.473710 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.473769 kubelet[3206]: E0913 00:00:40.473721 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:40.474089 kubelet[3206]: E0913 00:00:40.474000 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.474089 kubelet[3206]: W0913 00:00:40.474011 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.474089 kubelet[3206]: E0913 00:00:40.474021 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.474503 kubelet[3206]: E0913 00:00:40.474352 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.474503 kubelet[3206]: W0913 00:00:40.474365 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.474503 kubelet[3206]: E0913 00:00:40.474376 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.474692 kubelet[3206]: E0913 00:00:40.474606 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.474692 kubelet[3206]: W0913 00:00:40.474622 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.474692 kubelet[3206]: E0913 00:00:40.474634 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.474867 kubelet[3206]: E0913 00:00:40.474851 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.474867 kubelet[3206]: W0913 00:00:40.474865 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.474952 kubelet[3206]: E0913 00:00:40.474878 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.475034 kubelet[3206]: E0913 00:00:40.475021 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.475034 kubelet[3206]: W0913 00:00:40.475032 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.475144 kubelet[3206]: E0913 00:00:40.475041 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:40.475219 kubelet[3206]: E0913 00:00:40.475205 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.475261 kubelet[3206]: W0913 00:00:40.475221 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.475261 kubelet[3206]: E0913 00:00:40.475230 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.475428 kubelet[3206]: E0913 00:00:40.475413 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.475428 kubelet[3206]: W0913 00:00:40.475427 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.475498 kubelet[3206]: E0913 00:00:40.475436 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.475912 kubelet[3206]: E0913 00:00:40.475779 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.475912 kubelet[3206]: W0913 00:00:40.475794 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.475912 kubelet[3206]: E0913 00:00:40.475806 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.476174 kubelet[3206]: E0913 00:00:40.476077 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.476174 kubelet[3206]: W0913 00:00:40.476089 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.476174 kubelet[3206]: E0913 00:00:40.476100 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.476569 kubelet[3206]: E0913 00:00:40.476456 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.476569 kubelet[3206]: W0913 00:00:40.476472 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.476569 kubelet[3206]: E0913 00:00:40.476483 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:40.476754 kubelet[3206]: E0913 00:00:40.476722 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.476754 kubelet[3206]: W0913 00:00:40.476733 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.476754 kubelet[3206]: E0913 00:00:40.476743 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.477123 kubelet[3206]: E0913 00:00:40.477015 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.477123 kubelet[3206]: W0913 00:00:40.477026 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.477123 kubelet[3206]: E0913 00:00:40.477036 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.477864 kubelet[3206]: E0913 00:00:40.477388 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.477864 kubelet[3206]: W0913 00:00:40.477399 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.477864 kubelet[3206]: E0913 00:00:40.477410 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:00:40.478175 kubelet[3206]: E0913 00:00:40.478162 3206 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:00:40.478249 kubelet[3206]: W0913 00:00:40.478237 3206 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:00:40.478372 kubelet[3206]: E0913 00:00:40.478361 3206 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:00:42.270661 containerd[1701]: time="2025-09-13T00:00:42.270608080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:42.273632 containerd[1701]: time="2025-09-13T00:00:42.273586967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 13 00:00:42.279646 containerd[1701]: time="2025-09-13T00:00:42.279350101Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:42.285479 containerd[1701]: time="2025-09-13T00:00:42.285444396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:42.286578 containerd[1701]: time="2025-09-13T00:00:42.286201918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 3.47175373s" Sep 13 00:00:42.286578 containerd[1701]: time="2025-09-13T00:00:42.286234518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:00:42.296510 containerd[1701]: time="2025-09-13T00:00:42.296469503Z" level=info msg="CreateContainer within sandbox \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:00:42.326790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156756451.mount: Deactivated successfully. Sep 13 00:00:42.333427 kubelet[3206]: E0913 00:00:42.333366 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:42.342770 containerd[1701]: time="2025-09-13T00:00:42.342652054Z" level=info msg="CreateContainer within sandbox \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb\"" Sep 13 00:00:42.344276 containerd[1701]: time="2025-09-13T00:00:42.344238498Z" level=info msg="StartContainer for \"69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb\"" Sep 13 00:00:42.376289 systemd[1]: Started cri-containerd-69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb.scope - libcontainer container 69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb. 
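
Note: the driver-call spam above is kubelet's FlexVolume prober at work. It scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes each driver executable with the init subcommand, and unmarshals the JSON status the driver prints to stdout. Until Calico's flexvol-driver container (the pod2daemon-flexvol image pulled above) installs the nodeagent~uds/uds binary, the probe finds nothing to execute, gets empty output, and fails with "unexpected end of JSON input". A minimal sketch of that call convention in Go follows; driverStatus and the file layout are illustrative, and this is a generic FlexVolume stub, not Calico's actual uds driver.

// flexvol_init.go - minimal sketch of the FlexVolume call convention:
// the driver binary is invoked with a subcommand ([init] in the log above)
// and must print one JSON status object to stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the documented FlexVolume response shape.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // returned by "init"
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// kubelet's driver-call.go unmarshals this reply; an empty reply is
		// exactly the "unexpected end of JSON input" seen in the log.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Once a binary exists at the probed path and answers init this way, the dynamic-probe errors stop.
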
Sep 13 00:00:42.404231 containerd[1701]: time="2025-09-13T00:00:42.404049842Z" level=info msg="StartContainer for \"69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb\" returns successfully" Sep 13 00:00:42.410499 systemd[1]: cri-containerd-69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb.scope: Deactivated successfully. Sep 13 00:00:42.441020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb-rootfs.mount: Deactivated successfully. Sep 13 00:00:42.456430 kubelet[3206]: I0913 00:00:42.455894 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5db4f7f57-blxrh" podStartSLOduration=4.382170196 podStartE2EDuration="16.455877527s" podCreationTimestamp="2025-09-13 00:00:26 +0000 UTC" firstStartedPulling="2025-09-13 00:00:26.740408896 +0000 UTC m=+23.535179801" lastFinishedPulling="2025-09-13 00:00:38.814116227 +0000 UTC m=+35.608887132" observedRunningTime="2025-09-13 00:00:39.435936206 +0000 UTC m=+36.230707111" watchObservedRunningTime="2025-09-13 00:00:42.455877527 +0000 UTC m=+39.250648392" Sep 13 00:00:42.879172 kubelet[3206]: I0913 00:00:42.738765 3206 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:00:43.443770 containerd[1701]: time="2025-09-13T00:00:43.443694386Z" level=info msg="shim disconnected" id=69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb namespace=k8s.io Sep 13 00:00:43.443770 containerd[1701]: time="2025-09-13T00:00:43.443756587Z" level=warning msg="cleaning up after shim disconnected" id=69b431674776d3984e2d52675ed85fded2e89a0c2a96f50c08d618d944ac98eb namespace=k8s.io Sep 13 00:00:43.443770 containerd[1701]: time="2025-09-13T00:00:43.443765387Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:00:44.333828 kubelet[3206]: E0913 00:00:44.333776 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:44.434813 containerd[1701]: time="2025-09-13T00:00:44.434757411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:00:46.334093 kubelet[3206]: E0913 00:00:46.334032 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:48.334190 kubelet[3206]: E0913 00:00:48.334145 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:49.748169 containerd[1701]: time="2025-09-13T00:00:49.748118555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:49.751680 containerd[1701]: time="2025-09-13T00:00:49.751642563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes 
read=65913477" Sep 13 00:00:49.755271 containerd[1701]: time="2025-09-13T00:00:49.755211052Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:49.762561 containerd[1701]: time="2025-09-13T00:00:49.762253029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:00:49.763068 containerd[1701]: time="2025-09-13T00:00:49.763018671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 5.3282205s" Sep 13 00:00:49.763068 containerd[1701]: time="2025-09-13T00:00:49.763054391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:00:49.771164 containerd[1701]: time="2025-09-13T00:00:49.771124530Z" level=info msg="CreateContainer within sandbox \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:00:49.818568 containerd[1701]: time="2025-09-13T00:00:49.818515644Z" level=info msg="CreateContainer within sandbox \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e\"" Sep 13 00:00:49.819255 containerd[1701]: time="2025-09-13T00:00:49.819037766Z" level=info msg="StartContainer for \"e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e\"" Sep 13 00:00:49.856316 systemd[1]: Started cri-containerd-e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e.scope - libcontainer container e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e. Sep 13 00:00:49.886365 containerd[1701]: time="2025-09-13T00:00:49.886318007Z" level=info msg="StartContainer for \"e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e\" returns successfully" Sep 13 00:00:50.334088 kubelet[3206]: E0913 00:00:50.334021 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:51.120665 containerd[1701]: time="2025-09-13T00:00:51.120533601Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:00:51.123220 systemd[1]: cri-containerd-e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e.scope: Deactivated successfully. Sep 13 00:00:51.144601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e-rootfs.mount: Deactivated successfully. 
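
Note: the install-cni container started above populates /etc/cni/net.d, and the WRITE event on /etc/cni/net.d/calico-kubeconfig shows containerd re-scanning that directory on every filesystem change, failing until a loadable network config lands there; that is why kubelet keeps reporting "cni plugin not initialized" in the meantime. A minimal sketch of the discovery step, assuming the standard libcni helpers (containerd's real logic lives in its go-cni layer, so treat this as an illustration only):

// cniscan.go - sketch of how a CRI runtime discovers CNI network configs.
package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	const dir = "/etc/cni/net.d"
	// List candidate config files the way a runtime does. Real runtimes also
	// accept legacy .conf files; .conflist keeps this sketch simple.
	files, err := libcni.ConfFiles(dir, []string{".conflist"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		// The state the log reports until install-cni finishes writing.
		log.Fatalf("no network config found in %s: cni plugin not initialized", dir)
	}
	for _, f := range files {
		confList, err := libcni.ConfListFromFile(f)
		if err != nil {
			log.Printf("skipping %s: %v", f, err)
			continue
		}
		fmt.Printf("loaded network %q from %s\n", confList.Name, f)
	}
}
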
Sep 13 00:00:51.168628 kubelet[3206]: I0913 00:00:51.168594 3206 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:00:51.991420 systemd[1]: Created slice kubepods-besteffort-pod3fc266a6_dfea_443a_8b62_40a366a0b751.slice - libcontainer container kubepods-besteffort-pod3fc266a6_dfea_443a_8b62_40a366a0b751.slice. Sep 13 00:00:52.017638 containerd[1701]: time="2025-09-13T00:00:52.015397346Z" level=info msg="shim disconnected" id=e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e namespace=k8s.io Sep 13 00:00:52.017638 containerd[1701]: time="2025-09-13T00:00:52.015563946Z" level=warning msg="cleaning up after shim disconnected" id=e5513cf5b85f220013aa4f2917071ea9eec9252f096c2369a8e260495960847e namespace=k8s.io Sep 13 00:00:52.017638 containerd[1701]: time="2025-09-13T00:00:52.015621427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:00:52.036300 systemd[1]: Created slice kubepods-besteffort-pod60ec3209_7b4d_4322_84da_268cece8c800.slice - libcontainer container kubepods-besteffort-pod60ec3209_7b4d_4322_84da_268cece8c800.slice. Sep 13 00:00:52.050252 kubelet[3206]: I0913 00:00:52.050209 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fc266a6-dfea-443a-8b62-40a366a0b751-tigera-ca-bundle\") pod \"calico-kube-controllers-5564b5cb6-lt2mp\" (UID: \"3fc266a6-dfea-443a-8b62-40a366a0b751\") " pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" Sep 13 00:00:52.050252 kubelet[3206]: I0913 00:00:52.050254 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx9sw\" (UniqueName: \"kubernetes.io/projected/ecd9d5e5-4b3f-4586-9384-58b93046b59b-kube-api-access-xx9sw\") pod \"calico-apiserver-f6b9bbb9d-cs9x9\" (UID: \"ecd9d5e5-4b3f-4586-9384-58b93046b59b\") " pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" Sep 13 00:00:52.050609 kubelet[3206]: I0913 00:00:52.050275 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60ec3209-7b4d-4322-84da-268cece8c800-whisker-ca-bundle\") pod \"whisker-748b96b4cb-584tl\" (UID: \"60ec3209-7b4d-4322-84da-268cece8c800\") " pod="calico-system/whisker-748b96b4cb-584tl" Sep 13 00:00:52.050609 kubelet[3206]: I0913 00:00:52.050291 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxjz\" (UniqueName: \"kubernetes.io/projected/60ec3209-7b4d-4322-84da-268cece8c800-kube-api-access-qzxjz\") pod \"whisker-748b96b4cb-584tl\" (UID: \"60ec3209-7b4d-4322-84da-268cece8c800\") " pod="calico-system/whisker-748b96b4cb-584tl" Sep 13 00:00:52.050609 kubelet[3206]: I0913 00:00:52.050310 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60ec3209-7b4d-4322-84da-268cece8c800-whisker-backend-key-pair\") pod \"whisker-748b96b4cb-584tl\" (UID: \"60ec3209-7b4d-4322-84da-268cece8c800\") " pod="calico-system/whisker-748b96b4cb-584tl" Sep 13 00:00:52.050609 kubelet[3206]: I0913 00:00:52.050327 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncl5\" (UniqueName: \"kubernetes.io/projected/3fc266a6-dfea-443a-8b62-40a366a0b751-kube-api-access-wncl5\") pod 
\"calico-kube-controllers-5564b5cb6-lt2mp\" (UID: \"3fc266a6-dfea-443a-8b62-40a366a0b751\") " pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" Sep 13 00:00:52.050609 kubelet[3206]: I0913 00:00:52.050350 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ecd9d5e5-4b3f-4586-9384-58b93046b59b-calico-apiserver-certs\") pod \"calico-apiserver-f6b9bbb9d-cs9x9\" (UID: \"ecd9d5e5-4b3f-4586-9384-58b93046b59b\") " pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" Sep 13 00:00:52.060455 systemd[1]: Created slice kubepods-besteffort-pod3af0fb4e_a0fb_491d_8bd0_e7b0ffc69d67.slice - libcontainer container kubepods-besteffort-pod3af0fb4e_a0fb_491d_8bd0_e7b0ffc69d67.slice. Sep 13 00:00:52.066985 containerd[1701]: time="2025-09-13T00:00:52.066016976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tlnn9,Uid:3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:52.070048 systemd[1]: Created slice kubepods-besteffort-podecd9d5e5_4b3f_4586_9384_58b93046b59b.slice - libcontainer container kubepods-besteffort-podecd9d5e5_4b3f_4586_9384_58b93046b59b.slice. Sep 13 00:00:52.076556 systemd[1]: Created slice kubepods-burstable-podac6ed623_78ab_4da1_b5bf_5e6d5fcf0386.slice - libcontainer container kubepods-burstable-podac6ed623_78ab_4da1_b5bf_5e6d5fcf0386.slice. Sep 13 00:00:52.084299 systemd[1]: Created slice kubepods-besteffort-podaba7588f_007a_4a5b_a73c_d11a0b3eb891.slice - libcontainer container kubepods-besteffort-podaba7588f_007a_4a5b_a73c_d11a0b3eb891.slice. Sep 13 00:00:52.093469 systemd[1]: Created slice kubepods-besteffort-podd20e0726_7801_43dd_aff4_c9d38bb82a12.slice - libcontainer container kubepods-besteffort-podd20e0726_7801_43dd_aff4_c9d38bb82a12.slice. Sep 13 00:00:52.100466 systemd[1]: Created slice kubepods-burstable-podd9018d05_ac4d_4e32_951d_a33c403e0852.slice - libcontainer container kubepods-burstable-podd9018d05_ac4d_4e32_951d_a33c403e0852.slice. 
Sep 13 00:00:52.150875 kubelet[3206]: I0913 00:00:52.150696 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9l7\" (UniqueName: \"kubernetes.io/projected/d9018d05-ac4d-4e32-951d-a33c403e0852-kube-api-access-kk9l7\") pod \"coredns-674b8bbfcf-pthfz\" (UID: \"d9018d05-ac4d-4e32-951d-a33c403e0852\") " pod="kube-system/coredns-674b8bbfcf-pthfz" Sep 13 00:00:52.150875 kubelet[3206]: I0913 00:00:52.150745 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4ghp\" (UniqueName: \"kubernetes.io/projected/ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386-kube-api-access-v4ghp\") pod \"coredns-674b8bbfcf-sv5f6\" (UID: \"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386\") " pod="kube-system/coredns-674b8bbfcf-sv5f6" Sep 13 00:00:52.150875 kubelet[3206]: I0913 00:00:52.150793 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d20e0726-7801-43dd-aff4-c9d38bb82a12-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-2s2jz\" (UID: \"d20e0726-7801-43dd-aff4-c9d38bb82a12\") " pod="calico-system/goldmane-54d579b49d-2s2jz" Sep 13 00:00:52.150875 kubelet[3206]: I0913 00:00:52.150822 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d20e0726-7801-43dd-aff4-c9d38bb82a12-goldmane-key-pair\") pod \"goldmane-54d579b49d-2s2jz\" (UID: \"d20e0726-7801-43dd-aff4-c9d38bb82a12\") " pod="calico-system/goldmane-54d579b49d-2s2jz" Sep 13 00:00:52.150875 kubelet[3206]: I0913 00:00:52.150840 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv7qv\" (UniqueName: \"kubernetes.io/projected/d20e0726-7801-43dd-aff4-c9d38bb82a12-kube-api-access-mv7qv\") pod \"goldmane-54d579b49d-2s2jz\" (UID: \"d20e0726-7801-43dd-aff4-c9d38bb82a12\") " pod="calico-system/goldmane-54d579b49d-2s2jz" Sep 13 00:00:52.152547 kubelet[3206]: I0913 00:00:52.152515 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9018d05-ac4d-4e32-951d-a33c403e0852-config-volume\") pod \"coredns-674b8bbfcf-pthfz\" (UID: \"d9018d05-ac4d-4e32-951d-a33c403e0852\") " pod="kube-system/coredns-674b8bbfcf-pthfz" Sep 13 00:00:52.152740 kubelet[3206]: I0913 00:00:52.152631 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d20e0726-7801-43dd-aff4-c9d38bb82a12-config\") pod \"goldmane-54d579b49d-2s2jz\" (UID: \"d20e0726-7801-43dd-aff4-c9d38bb82a12\") " pod="calico-system/goldmane-54d579b49d-2s2jz" Sep 13 00:00:52.154456 kubelet[3206]: I0913 00:00:52.153279 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386-config-volume\") pod \"coredns-674b8bbfcf-sv5f6\" (UID: \"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386\") " pod="kube-system/coredns-674b8bbfcf-sv5f6" Sep 13 00:00:52.154456 kubelet[3206]: I0913 00:00:52.153313 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aba7588f-007a-4a5b-a73c-d11a0b3eb891-calico-apiserver-certs\") pod 
\"calico-apiserver-f6b9bbb9d-tt85v\" (UID: \"aba7588f-007a-4a5b-a73c-d11a0b3eb891\") " pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" Sep 13 00:00:52.154456 kubelet[3206]: I0913 00:00:52.153358 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvl5\" (UniqueName: \"kubernetes.io/projected/aba7588f-007a-4a5b-a73c-d11a0b3eb891-kube-api-access-7hvl5\") pod \"calico-apiserver-f6b9bbb9d-tt85v\" (UID: \"aba7588f-007a-4a5b-a73c-d11a0b3eb891\") " pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" Sep 13 00:00:52.214266 containerd[1701]: time="2025-09-13T00:00:52.214203418Z" level=error msg="Failed to destroy network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.214645 containerd[1701]: time="2025-09-13T00:00:52.214609299Z" level=error msg="encountered an error cleaning up failed sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.214711 containerd[1701]: time="2025-09-13T00:00:52.214678059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tlnn9,Uid:3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.215363 kubelet[3206]: E0913 00:00:52.214924 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.215363 kubelet[3206]: E0913 00:00:52.214990 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tlnn9" Sep 13 00:00:52.215363 kubelet[3206]: E0913 00:00:52.215009 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tlnn9" Sep 13 00:00:52.215514 kubelet[3206]: E0913 00:00:52.215061 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-tlnn9_calico-system(3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tlnn9_calico-system(3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:52.295428 containerd[1701]: time="2025-09-13T00:00:52.295289394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5564b5cb6-lt2mp,Uid:3fc266a6-dfea-443a-8b62-40a366a0b751,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:52.357711 containerd[1701]: time="2025-09-13T00:00:52.357357249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748b96b4cb-584tl,Uid:60ec3209-7b4d-4322-84da-268cece8c800,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:52.375881 containerd[1701]: time="2025-09-13T00:00:52.375843129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-cs9x9,Uid:ecd9d5e5-4b3f-4586-9384-58b93046b59b,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:00:52.381241 containerd[1701]: time="2025-09-13T00:00:52.381193901Z" level=error msg="Failed to destroy network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.382134 containerd[1701]: time="2025-09-13T00:00:52.381932223Z" level=error msg="encountered an error cleaning up failed sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.382134 containerd[1701]: time="2025-09-13T00:00:52.381992703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5564b5cb6-lt2mp,Uid:3fc266a6-dfea-443a-8b62-40a366a0b751,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.382282 kubelet[3206]: E0913 00:00:52.382213 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.382282 kubelet[3206]: E0913 00:00:52.382260 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" Sep 13 00:00:52.382352 kubelet[3206]: E0913 00:00:52.382282 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" Sep 13 00:00:52.382352 kubelet[3206]: E0913 00:00:52.382325 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5564b5cb6-lt2mp_calico-system(3fc266a6-dfea-443a-8b62-40a366a0b751)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5564b5cb6-lt2mp_calico-system(3fc266a6-dfea-443a-8b62-40a366a0b751)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" podUID="3fc266a6-dfea-443a-8b62-40a366a0b751" Sep 13 00:00:52.383775 containerd[1701]: time="2025-09-13T00:00:52.383745747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv5f6,Uid:ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386,Namespace:kube-system,Attempt:0,}" Sep 13 00:00:52.391993 containerd[1701]: time="2025-09-13T00:00:52.391777444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-tt85v,Uid:aba7588f-007a-4a5b-a73c-d11a0b3eb891,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:00:52.398300 containerd[1701]: time="2025-09-13T00:00:52.398260418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2s2jz,Uid:d20e0726-7801-43dd-aff4-c9d38bb82a12,Namespace:calico-system,Attempt:0,}" Sep 13 00:00:52.404721 containerd[1701]: time="2025-09-13T00:00:52.404448992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pthfz,Uid:d9018d05-ac4d-4e32-951d-a33c403e0852,Namespace:kube-system,Attempt:0,}" Sep 13 00:00:52.465488 containerd[1701]: time="2025-09-13T00:00:52.465423844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:00:52.467381 kubelet[3206]: I0913 00:00:52.467252 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:00:52.468973 kubelet[3206]: I0913 00:00:52.468890 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:00:52.469269 containerd[1701]: time="2025-09-13T00:00:52.469146132Z" level=info msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" Sep 13 00:00:52.469511 containerd[1701]: time="2025-09-13T00:00:52.469454813Z" level=info msg="Ensure that sandbox f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103 in task-service has been cleanup successfully" Sep 13 00:00:52.472839 containerd[1701]: 
time="2025-09-13T00:00:52.470144534Z" level=info msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" Sep 13 00:00:52.473413 containerd[1701]: time="2025-09-13T00:00:52.473231821Z" level=info msg="Ensure that sandbox 251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1 in task-service has been cleanup successfully" Sep 13 00:00:52.539442 containerd[1701]: time="2025-09-13T00:00:52.539222324Z" level=error msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" failed" error="failed to destroy network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.539442 containerd[1701]: time="2025-09-13T00:00:52.539345085Z" level=error msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" failed" error="failed to destroy network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.539633 kubelet[3206]: E0913 00:00:52.539519 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:00:52.539633 kubelet[3206]: E0913 00:00:52.539575 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103"} Sep 13 00:00:52.539633 kubelet[3206]: E0913 00:00:52.539624 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fc266a6-dfea-443a-8b62-40a366a0b751\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:52.539788 kubelet[3206]: E0913 00:00:52.539645 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fc266a6-dfea-443a-8b62-40a366a0b751\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" podUID="3fc266a6-dfea-443a-8b62-40a366a0b751" Sep 13 00:00:52.539788 kubelet[3206]: E0913 00:00:52.539675 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:00:52.539788 kubelet[3206]: E0913 00:00:52.539688 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1"} Sep 13 00:00:52.539788 kubelet[3206]: E0913 00:00:52.539703 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:52.540097 kubelet[3206]: E0913 00:00:52.539742 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:00:52.546767 containerd[1701]: time="2025-09-13T00:00:52.546650101Z" level=error msg="Failed to destroy network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.548197 containerd[1701]: time="2025-09-13T00:00:52.548013504Z" level=error msg="encountered an error cleaning up failed sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.548197 containerd[1701]: time="2025-09-13T00:00:52.548084984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748b96b4cb-584tl,Uid:60ec3209-7b4d-4322-84da-268cece8c800,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.548417 kubelet[3206]: E0913 00:00:52.548313 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 13 00:00:52.548606 kubelet[3206]: E0913 00:00:52.548573 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-748b96b4cb-584tl" Sep 13 00:00:52.548662 kubelet[3206]: E0913 00:00:52.548613 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-748b96b4cb-584tl" Sep 13 00:00:52.548709 kubelet[3206]: E0913 00:00:52.548675 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-748b96b4cb-584tl_calico-system(60ec3209-7b4d-4322-84da-268cece8c800)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-748b96b4cb-584tl_calico-system(60ec3209-7b4d-4322-84da-268cece8c800)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-748b96b4cb-584tl" podUID="60ec3209-7b4d-4322-84da-268cece8c800" Sep 13 00:00:52.696287 containerd[1701]: time="2025-09-13T00:00:52.696150465Z" level=error msg="Failed to destroy network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.696871 containerd[1701]: time="2025-09-13T00:00:52.696722427Z" level=error msg="encountered an error cleaning up failed sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.696871 containerd[1701]: time="2025-09-13T00:00:52.696775827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-tt85v,Uid:aba7588f-007a-4a5b-a73c-d11a0b3eb891,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.697294 kubelet[3206]: E0913 00:00:52.697019 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.697294 kubelet[3206]: E0913 00:00:52.697077 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" Sep 13 00:00:52.697294 kubelet[3206]: E0913 00:00:52.697119 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" Sep 13 00:00:52.697429 kubelet[3206]: E0913 00:00:52.697168 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f6b9bbb9d-tt85v_calico-apiserver(aba7588f-007a-4a5b-a73c-d11a0b3eb891)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f6b9bbb9d-tt85v_calico-apiserver(aba7588f-007a-4a5b-a73c-d11a0b3eb891)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" podUID="aba7588f-007a-4a5b-a73c-d11a0b3eb891" Sep 13 00:00:52.702178 containerd[1701]: time="2025-09-13T00:00:52.701953358Z" level=error msg="Failed to destroy network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.702362 containerd[1701]: time="2025-09-13T00:00:52.702307799Z" level=error msg="encountered an error cleaning up failed sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.702436 containerd[1701]: time="2025-09-13T00:00:52.702358399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-cs9x9,Uid:ecd9d5e5-4b3f-4586-9384-58b93046b59b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.702654 kubelet[3206]: E0913 00:00:52.702563 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.702654 kubelet[3206]: E0913 00:00:52.702626 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" Sep 13 00:00:52.702654 kubelet[3206]: E0913 00:00:52.702646 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" Sep 13 00:00:52.702872 kubelet[3206]: E0913 00:00:52.702694 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f6b9bbb9d-cs9x9_calico-apiserver(ecd9d5e5-4b3f-4586-9384-58b93046b59b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f6b9bbb9d-cs9x9_calico-apiserver(ecd9d5e5-4b3f-4586-9384-58b93046b59b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" podUID="ecd9d5e5-4b3f-4586-9384-58b93046b59b" Sep 13 00:00:52.714713 containerd[1701]: time="2025-09-13T00:00:52.714212265Z" level=error msg="Failed to destroy network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.715711 containerd[1701]: time="2025-09-13T00:00:52.715594068Z" level=error msg="encountered an error cleaning up failed sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.715711 containerd[1701]: time="2025-09-13T00:00:52.715672108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv5f6,Uid:ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.716173 kubelet[3206]: E0913 00:00:52.716082 3206 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.716242 kubelet[3206]: E0913 00:00:52.716195 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sv5f6" Sep 13 00:00:52.716242 kubelet[3206]: E0913 00:00:52.716218 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sv5f6" Sep 13 00:00:52.716325 kubelet[3206]: E0913 00:00:52.716273 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sv5f6_kube-system(ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sv5f6_kube-system(ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sv5f6" podUID="ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386" Sep 13 00:00:52.718510 containerd[1701]: time="2025-09-13T00:00:52.718466714Z" level=error msg="Failed to destroy network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.718815 containerd[1701]: time="2025-09-13T00:00:52.718784795Z" level=error msg="encountered an error cleaning up failed sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.718866 containerd[1701]: time="2025-09-13T00:00:52.718837355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2s2jz,Uid:d20e0726-7801-43dd-aff4-c9d38bb82a12,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.719092 
kubelet[3206]: E0913 00:00:52.719049 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.719172 kubelet[3206]: E0913 00:00:52.719128 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-2s2jz" Sep 13 00:00:52.719172 kubelet[3206]: E0913 00:00:52.719148 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-2s2jz" Sep 13 00:00:52.719235 kubelet[3206]: E0913 00:00:52.719192 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-2s2jz_calico-system(d20e0726-7801-43dd-aff4-c9d38bb82a12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-2s2jz_calico-system(d20e0726-7801-43dd-aff4-c9d38bb82a12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-2s2jz" podUID="d20e0726-7801-43dd-aff4-c9d38bb82a12" Sep 13 00:00:52.721150 containerd[1701]: time="2025-09-13T00:00:52.720867439Z" level=error msg="Failed to destroy network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.721605 containerd[1701]: time="2025-09-13T00:00:52.721552681Z" level=error msg="encountered an error cleaning up failed sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.721855 containerd[1701]: time="2025-09-13T00:00:52.721782521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pthfz,Uid:d9018d05-ac4d-4e32-951d-a33c403e0852,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 13 00:00:52.722305 kubelet[3206]: E0913 00:00:52.722262 3206 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:52.723218 kubelet[3206]: E0913 00:00:52.722310 3206 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pthfz" Sep 13 00:00:52.723218 kubelet[3206]: E0913 00:00:52.722333 3206 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pthfz" Sep 13 00:00:52.723218 kubelet[3206]: E0913 00:00:52.722374 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pthfz_kube-system(d9018d05-ac4d-4e32-951d-a33c403e0852)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pthfz_kube-system(d9018d05-ac4d-4e32-951d-a33c403e0852)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pthfz" podUID="d9018d05-ac4d-4e32-951d-a33c403e0852" Sep 13 00:00:53.178004 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1-shm.mount: Deactivated successfully. 
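Every failure logged above traces to one root cause: the Calico CNI plugin stats /var/lib/calico/nodename before the calico/node container has started and written it, so both sandbox setup (add) and teardown (delete) fail with the identical error. As an illustrative sketch only, not Calico's actual source, the guard implied by the error text looks roughly like this in Go (the helper name readNodename is an assumption):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the path the log shows the plugin stat-ing.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the check implied by the log lines above: stat the
// file first, and surface the same operator hint when it is missing.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			return "", fmt.Errorf("stat %s: no such file or directory: "+
				"check that the calico/node container is running and has mounted /var/lib/calico/",
				nodenameFile)
		}
		return "", err
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		// This is the error text propagated through containerd and kubelet above.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Until that file exists, kubelet keeps re-syncing the affected pods, which is why the same StopPodSandbox and RunPodSandbox errors repeat in the entries that follow.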
Sep 13 00:00:53.471895 kubelet[3206]: I0913 00:00:53.471786 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:00:53.472771 containerd[1701]: time="2025-09-13T00:00:53.472712073Z" level=info msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" Sep 13 00:00:53.473426 containerd[1701]: time="2025-09-13T00:00:53.473206834Z" level=info msg="Ensure that sandbox 63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597 in task-service has been cleanup successfully" Sep 13 00:00:53.474822 kubelet[3206]: I0913 00:00:53.474201 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:00:53.475383 containerd[1701]: time="2025-09-13T00:00:53.475340399Z" level=info msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" Sep 13 00:00:53.475539 containerd[1701]: time="2025-09-13T00:00:53.475505279Z" level=info msg="Ensure that sandbox ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e in task-service has been cleanup successfully" Sep 13 00:00:53.477064 kubelet[3206]: I0913 00:00:53.477025 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:00:53.478036 containerd[1701]: time="2025-09-13T00:00:53.478010444Z" level=info msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" Sep 13 00:00:53.479408 containerd[1701]: time="2025-09-13T00:00:53.479383247Z" level=info msg="Ensure that sandbox 5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81 in task-service has been cleanup successfully" Sep 13 00:00:53.480589 kubelet[3206]: I0913 00:00:53.480559 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:00:53.482877 containerd[1701]: time="2025-09-13T00:00:53.482520774Z" level=info msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" Sep 13 00:00:53.482877 containerd[1701]: time="2025-09-13T00:00:53.482654775Z" level=info msg="Ensure that sandbox 9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914 in task-service has been cleanup successfully" Sep 13 00:00:53.483815 kubelet[3206]: I0913 00:00:53.483788 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:00:53.484652 containerd[1701]: time="2025-09-13T00:00:53.484625219Z" level=info msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" Sep 13 00:00:53.484871 containerd[1701]: time="2025-09-13T00:00:53.484847019Z" level=info msg="Ensure that sandbox a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780 in task-service has been cleanup successfully" Sep 13 00:00:53.492825 kubelet[3206]: I0913 00:00:53.492365 3206 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:00:53.493755 containerd[1701]: time="2025-09-13T00:00:53.493043757Z" level=info msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" Sep 13 00:00:53.493755 
containerd[1701]: time="2025-09-13T00:00:53.493226238Z" level=info msg="Ensure that sandbox 17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800 in task-service has been cleanup successfully" Sep 13 00:00:53.546222 containerd[1701]: time="2025-09-13T00:00:53.546167513Z" level=error msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" failed" error="failed to destroy network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:53.546467 kubelet[3206]: E0913 00:00:53.546396 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:00:53.546719 kubelet[3206]: E0913 00:00:53.546690 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780"} Sep 13 00:00:53.546765 kubelet[3206]: E0913 00:00:53.546743 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecd9d5e5-4b3f-4586-9384-58b93046b59b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:53.546846 kubelet[3206]: E0913 00:00:53.546768 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecd9d5e5-4b3f-4586-9384-58b93046b59b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" podUID="ecd9d5e5-4b3f-4586-9384-58b93046b59b" Sep 13 00:00:53.558342 containerd[1701]: time="2025-09-13T00:00:53.558288419Z" level=error msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" failed" error="failed to destroy network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:53.558854 kubelet[3206]: E0913 00:00:53.558635 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:00:53.558854 kubelet[3206]: E0913 00:00:53.558693 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914"} Sep 13 00:00:53.558854 kubelet[3206]: E0913 00:00:53.558732 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:53.558854 kubelet[3206]: E0913 00:00:53.558756 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sv5f6" podUID="ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386" Sep 13 00:00:53.572997 containerd[1701]: time="2025-09-13T00:00:53.572585770Z" level=error msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" failed" error="failed to destroy network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:53.573560 containerd[1701]: time="2025-09-13T00:00:53.572972331Z" level=error msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" failed" error="failed to destroy network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:53.573859 kubelet[3206]: E0913 00:00:53.573800 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:00:53.573918 kubelet[3206]: E0913 00:00:53.573858 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597"} Sep 13 00:00:53.573918 kubelet[3206]: E0913 00:00:53.573889 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9018d05-ac4d-4e32-951d-a33c403e0852\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:53.574013 kubelet[3206]: E0913 00:00:53.573910 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9018d05-ac4d-4e32-951d-a33c403e0852\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pthfz" podUID="d9018d05-ac4d-4e32-951d-a33c403e0852" Sep 13 00:00:53.574013 kubelet[3206]: E0913 00:00:53.573949 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:00:53.574013 kubelet[3206]: E0913 00:00:53.573962 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e"} Sep 13 00:00:53.574013 kubelet[3206]: E0913 00:00:53.573978 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d20e0726-7801-43dd-aff4-c9d38bb82a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:53.574552 kubelet[3206]: E0913 00:00:53.573993 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d20e0726-7801-43dd-aff4-c9d38bb82a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-2s2jz" podUID="d20e0726-7801-43dd-aff4-c9d38bb82a12" Sep 13 00:00:53.587404 containerd[1701]: time="2025-09-13T00:00:53.587289122Z" level=error msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" failed" error="failed to destroy network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:53.587660 containerd[1701]: time="2025-09-13T00:00:53.587590363Z" level=error msg="StopPodSandbox for 
\"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" failed" error="failed to destroy network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:00:53.587694 kubelet[3206]: E0913 00:00:53.587509 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:00:53.587694 kubelet[3206]: E0913 00:00:53.587578 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81"} Sep 13 00:00:53.587694 kubelet[3206]: E0913 00:00:53.587607 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aba7588f-007a-4a5b-a73c-d11a0b3eb891\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:53.587694 kubelet[3206]: E0913 00:00:53.587633 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aba7588f-007a-4a5b-a73c-d11a0b3eb891\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" podUID="aba7588f-007a-4a5b-a73c-d11a0b3eb891" Sep 13 00:00:53.588028 kubelet[3206]: E0913 00:00:53.588005 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:00:53.588028 kubelet[3206]: E0913 00:00:53.588028 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800"} Sep 13 00:00:53.588128 kubelet[3206]: E0913 00:00:53.588046 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60ec3209-7b4d-4322-84da-268cece8c800\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:00:53.588128 kubelet[3206]: E0913 00:00:53.588062 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60ec3209-7b4d-4322-84da-268cece8c800\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-748b96b4cb-584tl" podUID="60ec3209-7b4d-4322-84da-268cece8c800" Sep 13 00:01:03.337285 containerd[1701]: time="2025-09-13T00:01:03.337239430Z" level=info msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" Sep 13 00:01:03.368448 containerd[1701]: time="2025-09-13T00:01:03.368398301Z" level=error msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" failed" error="failed to destroy network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:03.368686 kubelet[3206]: E0913 00:01:03.368637 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:01:03.368970 kubelet[3206]: E0913 00:01:03.368690 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103"} Sep 13 00:01:03.368970 kubelet[3206]: E0913 00:01:03.368722 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fc266a6-dfea-443a-8b62-40a366a0b751\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:03.368970 kubelet[3206]: E0913 00:01:03.368743 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fc266a6-dfea-443a-8b62-40a366a0b751\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" podUID="3fc266a6-dfea-443a-8b62-40a366a0b751" Sep 13 00:01:05.335403 containerd[1701]: time="2025-09-13T00:01:05.335069527Z" level=info msg="StopPodSandbox for 
\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" Sep 13 00:01:05.356625 containerd[1701]: time="2025-09-13T00:01:05.356386735Z" level=error msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" failed" error="failed to destroy network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:05.356784 kubelet[3206]: E0913 00:01:05.356646 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:01:05.356784 kubelet[3206]: E0913 00:01:05.356701 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780"} Sep 13 00:01:05.356784 kubelet[3206]: E0913 00:01:05.356731 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecd9d5e5-4b3f-4586-9384-58b93046b59b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:05.356784 kubelet[3206]: E0913 00:01:05.356752 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecd9d5e5-4b3f-4586-9384-58b93046b59b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" podUID="ecd9d5e5-4b3f-4586-9384-58b93046b59b" Sep 13 00:01:06.335322 containerd[1701]: time="2025-09-13T00:01:06.334279036Z" level=info msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" Sep 13 00:01:06.335972 containerd[1701]: time="2025-09-13T00:01:06.335639359Z" level=info msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" Sep 13 00:01:06.364385 containerd[1701]: time="2025-09-13T00:01:06.364296144Z" level=error msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" failed" error="failed to destroy network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:06.364658 kubelet[3206]: E0913 00:01:06.364580 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:01:06.364658 kubelet[3206]: E0913 00:01:06.364633 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e"} Sep 13 00:01:06.364934 kubelet[3206]: E0913 00:01:06.364662 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d20e0726-7801-43dd-aff4-c9d38bb82a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:06.364934 kubelet[3206]: E0913 00:01:06.364684 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d20e0726-7801-43dd-aff4-c9d38bb82a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-2s2jz" podUID="d20e0726-7801-43dd-aff4-c9d38bb82a12" Sep 13 00:01:06.366195 containerd[1701]: time="2025-09-13T00:01:06.366019788Z" level=error msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" failed" error="failed to destroy network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:06.366273 kubelet[3206]: E0913 00:01:06.366191 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:01:06.366273 kubelet[3206]: E0913 00:01:06.366228 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800"} Sep 13 00:01:06.366273 kubelet[3206]: E0913 00:01:06.366250 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60ec3209-7b4d-4322-84da-268cece8c800\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Sep 13 00:01:06.366368 kubelet[3206]: E0913 00:01:06.366267 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60ec3209-7b4d-4322-84da-268cece8c800\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-748b96b4cb-584tl" podUID="60ec3209-7b4d-4322-84da-268cece8c800" Sep 13 00:01:07.335069 containerd[1701]: time="2025-09-13T00:01:07.334899928Z" level=info msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" Sep 13 00:01:07.336313 containerd[1701]: time="2025-09-13T00:01:07.335996970Z" level=info msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" Sep 13 00:01:07.363689 containerd[1701]: time="2025-09-13T00:01:07.363519826Z" level=error msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" failed" error="failed to destroy network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:07.365273 kubelet[3206]: E0913 00:01:07.365236 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:01:07.366160 kubelet[3206]: E0913 00:01:07.365721 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1"} Sep 13 00:01:07.366160 kubelet[3206]: E0913 00:01:07.365767 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:07.366160 kubelet[3206]: E0913 00:01:07.365789 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tlnn9" podUID="3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67" Sep 13 00:01:07.367158 containerd[1701]: 
time="2025-09-13T00:01:07.367090834Z" level=error msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" failed" error="failed to destroy network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:07.367373 kubelet[3206]: E0913 00:01:07.367350 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:01:07.367525 kubelet[3206]: E0913 00:01:07.367457 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914"} Sep 13 00:01:07.367525 kubelet[3206]: E0913 00:01:07.367488 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:07.367525 kubelet[3206]: E0913 00:01:07.367505 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sv5f6" podUID="ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386" Sep 13 00:01:08.336624 containerd[1701]: time="2025-09-13T00:01:08.336472249Z" level=info msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" Sep 13 00:01:08.378134 containerd[1701]: time="2025-09-13T00:01:08.377992414Z" level=error msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" failed" error="failed to destroy network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:08.378451 kubelet[3206]: E0913 00:01:08.378361 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:01:08.378725 kubelet[3206]: E0913 00:01:08.378459 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81"} Sep 13 00:01:08.378725 kubelet[3206]: E0913 00:01:08.378491 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aba7588f-007a-4a5b-a73c-d11a0b3eb891\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:08.378725 kubelet[3206]: E0913 00:01:08.378514 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aba7588f-007a-4a5b-a73c-d11a0b3eb891\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" podUID="aba7588f-007a-4a5b-a73c-d11a0b3eb891" Sep 13 00:01:09.335465 containerd[1701]: time="2025-09-13T00:01:09.335425965Z" level=info msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" Sep 13 00:01:09.383899 containerd[1701]: time="2025-09-13T00:01:09.383856943Z" level=error msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" failed" error="failed to destroy network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:01:09.384719 kubelet[3206]: E0913 00:01:09.384666 3206 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:01:09.385056 kubelet[3206]: E0913 00:01:09.384721 3206 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597"} Sep 13 00:01:09.385056 kubelet[3206]: E0913 00:01:09.384826 3206 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9018d05-ac4d-4e32-951d-a33c403e0852\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:01:09.385056 kubelet[3206]: 
E0913 00:01:09.384848 3206 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9018d05-ac4d-4e32-951d-a33c403e0852\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pthfz" podUID="d9018d05-ac4d-4e32-951d-a33c403e0852" Sep 13 00:01:09.711484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125461127.mount: Deactivated successfully. Sep 13 00:01:09.755372 containerd[1701]: time="2025-09-13T00:01:09.755325060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:09.758896 containerd[1701]: time="2025-09-13T00:01:09.758752667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:01:09.762137 containerd[1701]: time="2025-09-13T00:01:09.761866834Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:09.767227 containerd[1701]: time="2025-09-13T00:01:09.766566403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:09.767227 containerd[1701]: time="2025-09-13T00:01:09.767093724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 17.30163172s" Sep 13 00:01:09.767227 containerd[1701]: time="2025-09-13T00:01:09.767142524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:01:09.784904 containerd[1701]: time="2025-09-13T00:01:09.784861241Z" level=info msg="CreateContainer within sandbox \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:01:09.828662 containerd[1701]: time="2025-09-13T00:01:09.828619210Z" level=info msg="CreateContainer within sandbox \"4af6ab61f0df576e77e31d1bc80e1984da3f834615166f7f746d4a675ed58565\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"144c452a19d7c3616ca6e46c9af25d5fb8ff106b482a4378b69a3d6e8944e489\"" Sep 13 00:01:09.829808 containerd[1701]: time="2025-09-13T00:01:09.829770812Z" level=info msg="StartContainer for \"144c452a19d7c3616ca6e46c9af25d5fb8ff106b482a4378b69a3d6e8944e489\"" Sep 13 00:01:09.853284 systemd[1]: Started cri-containerd-144c452a19d7c3616ca6e46c9af25d5fb8ff106b482a4378b69a3d6e8944e489.scope - libcontainer container 144c452a19d7c3616ca6e46c9af25d5fb8ff106b482a4378b69a3d6e8944e489. 
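Every KillPodSandbox failure above shares one root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes once it is running, and the 151 MB node image only finishes pulling at 00:01:09, 17.3 s after the pull began. The Go sketch below reconstructs that check from the error text alone; it is illustrative, not the actual projectcalico/cni-plugin code, and the function and constant names are invented.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors what the log implies: every CNI ADD/DEL first reads
    // the nodename file that calico/node writes at startup, and fails the whole
    // operation with the exact message seen above until that file exists.
    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        if _, err := readNodename(); err != nil {
            // kubelet wraps this as KillPodSandboxError and logs "Error syncing pod, skipping".
            fmt.Println("DEL failed:", err)
        }
    }

Because the kubelet retries sandbox teardown on every sync loop, the identical error repeats above for the whisker, csi-node-driver, coredns, and calico-apiserver pods until calico-node is up.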
Sep 13 00:01:09.889276 containerd[1701]: time="2025-09-13T00:01:09.889224493Z" level=info msg="StartContainer for \"144c452a19d7c3616ca6e46c9af25d5fb8ff106b482a4378b69a3d6e8944e489\" returns successfully" Sep 13 00:01:10.423325 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:01:10.423477 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:01:10.548489 containerd[1701]: time="2025-09-13T00:01:10.548239876Z" level=info msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" Sep 13 00:01:10.580213 kubelet[3206]: I0913 00:01:10.579067 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fstqq" podStartSLOduration=1.6655673229999999 podStartE2EDuration="44.579048939s" podCreationTimestamp="2025-09-13 00:00:26 +0000 UTC" firstStartedPulling="2025-09-13 00:00:26.85438363 +0000 UTC m=+23.649154535" lastFinishedPulling="2025-09-13 00:01:09.767865286 +0000 UTC m=+66.562636151" observedRunningTime="2025-09-13 00:01:10.578242297 +0000 UTC m=+67.373013202" watchObservedRunningTime="2025-09-13 00:01:10.579048939 +0000 UTC m=+67.373819804" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.701 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.704 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" iface="eth0" netns="/var/run/netns/cni-fbe883fb-2ac9-c2f1-1fe7-94785fe23afb" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.704 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" iface="eth0" netns="/var/run/netns/cni-fbe883fb-2ac9-c2f1-1fe7-94785fe23afb" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.705 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" iface="eth0" netns="/var/run/netns/cni-fbe883fb-2ac9-c2f1-1fe7-94785fe23afb" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.705 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.706 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.748 [INFO][4578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.749 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.750 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
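The pod_startup_latency_tracker line above reports two durations for calico-node-fstqq, and its own timestamps explain the gap between them: the end-to-end figure runs from pod creation to the watch-observed running time, while the SLO figure subtracts the ~42.9 s spent pulling images (the monotonic m=+ offsets reproduce it exactly). A quick Go check, purely arithmetic on numbers copied from that line:

    package main

    import "fmt"

    func main() {
        // watchObservedRunningTime - podCreationTimestamp, from the line above:
        e2e := 44.579048939 // podStartE2EDuration
        // image-pull window from the monotonic offsets m=+66.562636151 and m=+23.649154535:
        pull := 66.562636151 - 23.649154535
        fmt.Printf("SLO = %.9fs\n", e2e-pull) // 1.665567323s = podStartSLOduration
    }

So the pod itself started in about 1.7 s; nearly all of the 44.6 s wall time went to the image pull logged earlier.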
Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.761 [WARNING][4578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.761 [INFO][4578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.763 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:10.770533 containerd[1701]: 2025-09-13 00:01:10.766 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:01:10.770533 containerd[1701]: time="2025-09-13T00:01:10.768476925Z" level=info msg="TearDown network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" successfully" Sep 13 00:01:10.770533 containerd[1701]: time="2025-09-13T00:01:10.768503485Z" level=info msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" returns successfully" Sep 13 00:01:10.773019 systemd[1]: run-netns-cni\x2dfbe883fb\x2d2ac9\x2dc2f1\x2d1fe7\x2d94785fe23afb.mount: Deactivated successfully. Sep 13 00:01:10.873434 kubelet[3206]: I0913 00:01:10.873389 3206 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzxjz\" (UniqueName: \"kubernetes.io/projected/60ec3209-7b4d-4322-84da-268cece8c800-kube-api-access-qzxjz\") pod \"60ec3209-7b4d-4322-84da-268cece8c800\" (UID: \"60ec3209-7b4d-4322-84da-268cece8c800\") " Sep 13 00:01:10.873434 kubelet[3206]: I0913 00:01:10.873440 3206 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60ec3209-7b4d-4322-84da-268cece8c800-whisker-ca-bundle\") pod \"60ec3209-7b4d-4322-84da-268cece8c800\" (UID: \"60ec3209-7b4d-4322-84da-268cece8c800\") " Sep 13 00:01:10.873600 kubelet[3206]: I0913 00:01:10.873461 3206 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60ec3209-7b4d-4322-84da-268cece8c800-whisker-backend-key-pair\") pod \"60ec3209-7b4d-4322-84da-268cece8c800\" (UID: \"60ec3209-7b4d-4322-84da-268cece8c800\") " Sep 13 00:01:10.876723 kubelet[3206]: I0913 00:01:10.875089 3206 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ec3209-7b4d-4322-84da-268cece8c800-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "60ec3209-7b4d-4322-84da-268cece8c800" (UID: "60ec3209-7b4d-4322-84da-268cece8c800"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:01:10.885885 systemd[1]: var-lib-kubelet-pods-60ec3209\x2d7b4d\x2d4322\x2d84da\x2d268cece8c800-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqzxjz.mount: Deactivated successfully. 
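The mount and netns unit names in the systemd lines above look mangled but are just systemd's path escaping: "/" becomes "-", and bytes outside [a-zA-Z0-9_.] are hex-escaped, so "-" appears as \x2d and the "~" in kubernetes.io~projected as \x7e. A rough Go equivalent of `systemd-escape --path` (leading-dot and empty-path edge cases deliberately omitted):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates `systemd-escape --path`: strip surrounding
    // slashes, map the remaining "/" to "-", and hex-escape anything that is
    // not [a-zA-Z0-9_.].
    func escapePath(p string) string {
        var b strings.Builder
        for _, c := range []byte(strings.Trim(p, "/")) {
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        p := "/var/lib/kubelet/pods/60ec3209-7b4d-4322-84da-268cece8c800/volumes/kubernetes.io~projected/kube-api-access-qzxjz"
        // Prints the exact unit name deactivated above:
        // var-lib-kubelet-pods-60ec3209\x2d7b4d\x2d4322\x2d84da\x2d268cece8c800-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqzxjz.mount
        fmt.Println(escapePath(p) + ".mount")
    }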
Sep 13 00:01:10.887397 kubelet[3206]: I0913 00:01:10.887341 3206 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ec3209-7b4d-4322-84da-268cece8c800-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "60ec3209-7b4d-4322-84da-268cece8c800" (UID: "60ec3209-7b4d-4322-84da-268cece8c800"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:01:10.887631 systemd[1]: var-lib-kubelet-pods-60ec3209\x2d7b4d\x2d4322\x2d84da\x2d268cece8c800-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:01:10.887910 kubelet[3206]: I0913 00:01:10.887881 3206 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ec3209-7b4d-4322-84da-268cece8c800-kube-api-access-qzxjz" (OuterVolumeSpecName: "kube-api-access-qzxjz") pod "60ec3209-7b4d-4322-84da-268cece8c800" (UID: "60ec3209-7b4d-4322-84da-268cece8c800"). InnerVolumeSpecName "kube-api-access-qzxjz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:01:10.974777 kubelet[3206]: I0913 00:01:10.974732 3206 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzxjz\" (UniqueName: \"kubernetes.io/projected/60ec3209-7b4d-4322-84da-268cece8c800-kube-api-access-qzxjz\") on node \"ci-4081.3.5-n-4f403f96f8\" DevicePath \"\"" Sep 13 00:01:10.974777 kubelet[3206]: I0913 00:01:10.974766 3206 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60ec3209-7b4d-4322-84da-268cece8c800-whisker-ca-bundle\") on node \"ci-4081.3.5-n-4f403f96f8\" DevicePath \"\"" Sep 13 00:01:10.974777 kubelet[3206]: I0913 00:01:10.974776 3206 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60ec3209-7b4d-4322-84da-268cece8c800-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-4f403f96f8\" DevicePath \"\"" Sep 13 00:01:11.345595 systemd[1]: Removed slice kubepods-besteffort-pod60ec3209_7b4d_4322_84da_268cece8c800.slice - libcontainer container kubepods-besteffort-pod60ec3209_7b4d_4322_84da_268cece8c800.slice. Sep 13 00:01:11.675183 systemd[1]: Created slice kubepods-besteffort-podd8b30a25_a73c_4b21_aa9f_ec7fb0464bab.slice - libcontainer container kubepods-besteffort-podd8b30a25_a73c_4b21_aa9f_ec7fb0464bab.slice. 
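The Removed/Created slice pair just below follows the kubelet's cgroup naming for BestEffort pods: the pod UID with dashes flipped to underscores, nested under the QoS class. An illustrative one-liner (not a kubelet API) that reproduces both names:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice rebuilds the systemd slice names seen in this log: QoS class
    // plus the pod UID with "-" mapped to "_" (systemd would otherwise
    // hex-escape the dashes, as in the mount units above).
    func podSlice(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSlice("besteffort", "60ec3209-7b4d-4322-84da-268cece8c800")) // removed: old whisker pod
        fmt.Println(podSlice("besteffort", "d8b30a25-a73c-4b21-aa9f-ec7fb0464bab")) // created: whisker-f7d8c959-hjvwh
    }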
Sep 13 00:01:11.778920 kubelet[3206]: I0913 00:01:11.778868 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8b30a25-a73c-4b21-aa9f-ec7fb0464bab-whisker-backend-key-pair\") pod \"whisker-f7d8c959-hjvwh\" (UID: \"d8b30a25-a73c-4b21-aa9f-ec7fb0464bab\") " pod="calico-system/whisker-f7d8c959-hjvwh" Sep 13 00:01:11.778920 kubelet[3206]: I0913 00:01:11.778921 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqnj\" (UniqueName: \"kubernetes.io/projected/d8b30a25-a73c-4b21-aa9f-ec7fb0464bab-kube-api-access-lkqnj\") pod \"whisker-f7d8c959-hjvwh\" (UID: \"d8b30a25-a73c-4b21-aa9f-ec7fb0464bab\") " pod="calico-system/whisker-f7d8c959-hjvwh" Sep 13 00:01:11.779332 kubelet[3206]: I0913 00:01:11.778948 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b30a25-a73c-4b21-aa9f-ec7fb0464bab-whisker-ca-bundle\") pod \"whisker-f7d8c959-hjvwh\" (UID: \"d8b30a25-a73c-4b21-aa9f-ec7fb0464bab\") " pod="calico-system/whisker-f7d8c959-hjvwh" Sep 13 00:01:11.978812 containerd[1701]: time="2025-09-13T00:01:11.978646551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7d8c959-hjvwh,Uid:d8b30a25-a73c-4b21-aa9f-ec7fb0464bab,Namespace:calico-system,Attempt:0,}" Sep 13 00:01:12.414214 kernel: bpftool[4743]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:01:12.810494 systemd-networkd[1566]: vxlan.calico: Link UP Sep 13 00:01:12.810501 systemd-networkd[1566]: vxlan.calico: Gained carrier Sep 13 00:01:12.902459 systemd-networkd[1566]: cali9c04cf56eab: Link UP Sep 13 00:01:12.902641 systemd-networkd[1566]: cali9c04cf56eab: Gained carrier Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.767 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0 whisker-f7d8c959- calico-system d8b30a25-a73c-4b21-aa9f-ec7fb0464bab 965 0 2025-09-13 00:01:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f7d8c959 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 whisker-f7d8c959-hjvwh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9c04cf56eab [] [] }} ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.767 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.800 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" HandleID="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:12.925878 
containerd[1701]: 2025-09-13 00:01:12.800 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" HandleID="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"whisker-f7d8c959-hjvwh", "timestamp":"2025-09-13 00:01:12.800300346 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.800 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.800 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.800 [INFO][4772] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.814 [INFO][4772] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.819 [INFO][4772] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.824 [INFO][4772] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.826 [INFO][4772] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.828 [INFO][4772] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.828 [INFO][4772] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.832 [INFO][4772] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.840 [INFO][4772] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.847 [INFO][4772] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.65/26] block=192.168.77.64/26 handle="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.848 [INFO][4772] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.65/26] handle="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" 
host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.848 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:12.925878 containerd[1701]: 2025-09-13 00:01:12.848 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.65/26] IPv6=[] ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" HandleID="k8s-pod-network.70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:12.927865 containerd[1701]: 2025-09-13 00:01:12.849 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0", GenerateName:"whisker-f7d8c959-", Namespace:"calico-system", SelfLink:"", UID:"d8b30a25-a73c-4b21-aa9f-ec7fb0464bab", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7d8c959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"whisker-f7d8c959-hjvwh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9c04cf56eab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:12.927865 containerd[1701]: 2025-09-13 00:01:12.849 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.65/32] ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:12.927865 containerd[1701]: 2025-09-13 00:01:12.849 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c04cf56eab ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:12.927865 containerd[1701]: 2025-09-13 00:01:12.903 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:12.927865 containerd[1701]: 2025-09-13 00:01:12.905 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0", GenerateName:"whisker-f7d8c959-", Namespace:"calico-system", SelfLink:"", UID:"d8b30a25-a73c-4b21-aa9f-ec7fb0464bab", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7d8c959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca", Pod:"whisker-f7d8c959-hjvwh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.77.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9c04cf56eab", MAC:"0e:47:ac:b4:2a:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:12.927865 containerd[1701]: 2025-09-13 00:01:12.921 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca" Namespace="calico-system" Pod="whisker-f7d8c959-hjvwh" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--f7d8c959--hjvwh-eth0" Sep 13 00:01:13.109277 containerd[1701]: time="2025-09-13T00:01:13.109171015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:13.109277 containerd[1701]: time="2025-09-13T00:01:13.109233375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:13.109860 containerd[1701]: time="2025-09-13T00:01:13.109256055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:13.109860 containerd[1701]: time="2025-09-13T00:01:13.109808136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:13.139586 systemd[1]: Started cri-containerd-70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca.scope - libcontainer container 70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca. 
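The IPAM trace above follows Calico's block-affinity scheme: this host already holds an affinity for 192.168.77.64/26, loads that block under the host-wide lock, and claims the lowest free address, .65 for this whisker pod (the next two sandboxes in this log get .66 and .67). Below is a toy Go model of that walk; the real allocator in ipam/ipam.go also manages handles, reservations, and cross-host block claiming, none of which is modeled here.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for a Calico IPAM affinity block.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]bool
    }

    // assign hands out the lowest free address, skipping the network address
    // itself for simplicity.
    func (b *block) assign() (netip.Addr, bool) {
        for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
            if !b.used[a] {
                b.used[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.77.64/26"), used: map[netip.Addr]bool{}}
        for i := 0; i < 3; i++ {
            a, _ := b.assign()
            fmt.Println(a) // 192.168.77.65, .66, .67 — whisker, calico-kube-controllers, csi-node-driver
        }
    }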
Sep 13 00:01:13.170278 containerd[1701]: time="2025-09-13T00:01:13.170219819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7d8c959-hjvwh,Uid:d8b30a25-a73c-4b21-aa9f-ec7fb0464bab,Namespace:calico-system,Attempt:0,} returns sandbox id \"70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca\"" Sep 13 00:01:13.184371 containerd[1701]: time="2025-09-13T00:01:13.184210888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:01:13.336534 kubelet[3206]: I0913 00:01:13.336494 3206 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ec3209-7b4d-4322-84da-268cece8c800" path="/var/lib/kubelet/pods/60ec3209-7b4d-4322-84da-268cece8c800/volumes" Sep 13 00:01:14.759417 systemd-networkd[1566]: vxlan.calico: Gained IPv6LL Sep 13 00:01:14.823370 systemd-networkd[1566]: cali9c04cf56eab: Gained IPv6LL Sep 13 00:01:15.371745 containerd[1701]: time="2025-09-13T00:01:15.371200940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:15.374401 containerd[1701]: time="2025-09-13T00:01:15.374357668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:01:15.377965 containerd[1701]: time="2025-09-13T00:01:15.377930756Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:15.382910 containerd[1701]: time="2025-09-13T00:01:15.382868567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:15.383703 containerd[1701]: time="2025-09-13T00:01:15.383553608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.19930424s" Sep 13 00:01:15.383703 containerd[1701]: time="2025-09-13T00:01:15.383609729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:01:15.401062 containerd[1701]: time="2025-09-13T00:01:15.401008288Z" level=info msg="CreateContainer within sandbox \"70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:01:15.440969 containerd[1701]: time="2025-09-13T00:01:15.440839579Z" level=info msg="CreateContainer within sandbox \"70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c310b8c981210019fd6772f22edb5db114b0a26ef3c5251b81cc4a4e48508f71\"" Sep 13 00:01:15.441741 containerd[1701]: time="2025-09-13T00:01:15.441538180Z" level=info msg="StartContainer for \"c310b8c981210019fd6772f22edb5db114b0a26ef3c5251b81cc4a4e48508f71\"" Sep 13 00:01:15.471339 systemd[1]: Started cri-containerd-c310b8c981210019fd6772f22edb5db114b0a26ef3c5251b81cc4a4e48508f71.scope - libcontainer container c310b8c981210019fd6772f22edb5db114b0a26ef3c5251b81cc4a4e48508f71. 
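For scale, the two PullImage lines in this log imply very different effective rates: dividing "bytes read" by the reported wall time gives roughly 8.7 MB/s for the 151 MB calico/node image versus about 2.1 MB/s for the small whisker image, where per-request overhead dominates. Rough arithmetic only; "bytes read" excludes any layers already present locally.

    package main

    import "fmt"

    func main() {
        // bytes read and pull durations copied from the two PullImage lines.
        fmt.Printf("calico/node:    %.1f MB/s\n", 151100457/17.30163172/1e6) // ≈ 8.7 MB/s
        fmt.Printf("calico/whisker: %.1f MB/s\n", 4605606/2.19930424/1e6)    // ≈ 2.1 MB/s
    }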
Sep 13 00:01:15.511792 containerd[1701]: time="2025-09-13T00:01:15.511702860Z" level=info msg="StartContainer for \"c310b8c981210019fd6772f22edb5db114b0a26ef3c5251b81cc4a4e48508f71\" returns successfully" Sep 13 00:01:15.514582 containerd[1701]: time="2025-09-13T00:01:15.514367826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:01:17.335128 containerd[1701]: time="2025-09-13T00:01:17.334987762Z" level=info msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.391 [INFO][4937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.391 [INFO][4937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" iface="eth0" netns="/var/run/netns/cni-5f9dc022-275d-927a-4e89-7ef2a1544aee" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.391 [INFO][4937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" iface="eth0" netns="/var/run/netns/cni-5f9dc022-275d-927a-4e89-7ef2a1544aee" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.391 [INFO][4937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" iface="eth0" netns="/var/run/netns/cni-5f9dc022-275d-927a-4e89-7ef2a1544aee" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.392 [INFO][4937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.392 [INFO][4937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.414 [INFO][4944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.414 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.415 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.425 [WARNING][4944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.425 [INFO][4944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.427 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:17.430867 containerd[1701]: 2025-09-13 00:01:17.429 [INFO][4937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:01:17.433675 containerd[1701]: time="2025-09-13T00:01:17.431091580Z" level=info msg="TearDown network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" successfully" Sep 13 00:01:17.433675 containerd[1701]: time="2025-09-13T00:01:17.431185340Z" level=info msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" returns successfully" Sep 13 00:01:17.435581 systemd[1]: run-netns-cni\x2d5f9dc022\x2d275d\x2d927a\x2d4e89\x2d7ef2a1544aee.mount: Deactivated successfully. Sep 13 00:01:17.441190 containerd[1701]: time="2025-09-13T00:01:17.440847722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5564b5cb6-lt2mp,Uid:3fc266a6-dfea-443a-8b62-40a366a0b751,Namespace:calico-system,Attempt:1,}" Sep 13 00:01:17.600617 systemd-networkd[1566]: caliaa5665b195e: Link UP Sep 13 00:01:17.603236 systemd-networkd[1566]: caliaa5665b195e: Gained carrier Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.516 [INFO][4951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0 calico-kube-controllers-5564b5cb6- calico-system 3fc266a6-dfea-443a-8b62-40a366a0b751 989 0 2025-09-13 00:00:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5564b5cb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 calico-kube-controllers-5564b5cb6-lt2mp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa5665b195e [] [] }} ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.516 [INFO][4951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.542 [INFO][4963] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" HandleID="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.543 [INFO][4963] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" HandleID="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"calico-kube-controllers-5564b5cb6-lt2mp", "timestamp":"2025-09-13 00:01:17.542914954 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.543 [INFO][4963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.543 [INFO][4963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.543 [INFO][4963] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.553 [INFO][4963] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.559 [INFO][4963] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.567 [INFO][4963] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.570 [INFO][4963] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.572 [INFO][4963] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.573 [INFO][4963] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.575 [INFO][4963] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653 Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.583 [INFO][4963] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.591 [INFO][4963] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.66/26] block=192.168.77.64/26 
handle="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.591 [INFO][4963] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.66/26] handle="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.591 [INFO][4963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:17.630145 containerd[1701]: 2025-09-13 00:01:17.591 [INFO][4963] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.66/26] IPv6=[] ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" HandleID="k8s-pod-network.5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.631470 containerd[1701]: 2025-09-13 00:01:17.594 [INFO][4951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0", GenerateName:"calico-kube-controllers-5564b5cb6-", Namespace:"calico-system", SelfLink:"", UID:"3fc266a6-dfea-443a-8b62-40a366a0b751", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5564b5cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"calico-kube-controllers-5564b5cb6-lt2mp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5665b195e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:17.631470 containerd[1701]: 2025-09-13 00:01:17.594 [INFO][4951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.66/32] ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.631470 containerd[1701]: 2025-09-13 00:01:17.594 [INFO][4951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa5665b195e ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" 
Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.631470 containerd[1701]: 2025-09-13 00:01:17.604 [INFO][4951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.631470 containerd[1701]: 2025-09-13 00:01:17.604 [INFO][4951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0", GenerateName:"calico-kube-controllers-5564b5cb6-", Namespace:"calico-system", SelfLink:"", UID:"3fc266a6-dfea-443a-8b62-40a366a0b751", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5564b5cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653", Pod:"calico-kube-controllers-5564b5cb6-lt2mp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5665b195e", MAC:"da:d2:15:2e:aa:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:17.631470 containerd[1701]: 2025-09-13 00:01:17.624 [INFO][4951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653" Namespace="calico-system" Pod="calico-kube-controllers-5564b5cb6-lt2mp" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:01:17.652695 containerd[1701]: time="2025-09-13T00:01:17.652000482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:17.652695 containerd[1701]: time="2025-09-13T00:01:17.652063082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:17.652695 containerd[1701]: time="2025-09-13T00:01:17.652093442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:17.653169 containerd[1701]: time="2025-09-13T00:01:17.652970644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:17.676965 systemd[1]: Started cri-containerd-5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653.scope - libcontainer container 5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653. Sep 13 00:01:17.711371 containerd[1701]: time="2025-09-13T00:01:17.711324497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5564b5cb6-lt2mp,Uid:3fc266a6-dfea-443a-8b62-40a366a0b751,Namespace:calico-system,Attempt:1,} returns sandbox id \"5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653\"" Sep 13 00:01:19.303386 systemd-networkd[1566]: caliaa5665b195e: Gained IPv6LL Sep 13 00:01:19.335238 containerd[1701]: time="2025-09-13T00:01:19.335191066Z" level=info msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.388 [INFO][5036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.390 [INFO][5036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" iface="eth0" netns="/var/run/netns/cni-ac5a8a6e-ccfe-e364-8d75-bf1de8b63016" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.390 [INFO][5036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" iface="eth0" netns="/var/run/netns/cni-ac5a8a6e-ccfe-e364-8d75-bf1de8b63016" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.391 [INFO][5036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" iface="eth0" netns="/var/run/netns/cni-ac5a8a6e-ccfe-e364-8d75-bf1de8b63016" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.392 [INFO][5036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.392 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.416 [INFO][5043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.416 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.416 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.426 [WARNING][5043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.426 [INFO][5043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.430 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:19.434126 containerd[1701]: 2025-09-13 00:01:19.432 [INFO][5036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:01:19.436856 containerd[1701]: time="2025-09-13T00:01:19.436186975Z" level=info msg="TearDown network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" successfully" Sep 13 00:01:19.436856 containerd[1701]: time="2025-09-13T00:01:19.436230975Z" level=info msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" returns successfully" Sep 13 00:01:19.437399 systemd[1]: run-netns-cni\x2dac5a8a6e\x2dccfe\x2de364\x2d8d75\x2dbf1de8b63016.mount: Deactivated successfully. Sep 13 00:01:19.441965 containerd[1701]: time="2025-09-13T00:01:19.441590467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tlnn9,Uid:3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67,Namespace:calico-system,Attempt:1,}" Sep 13 00:01:19.609681 systemd-networkd[1566]: cali9e60d384e03: Link UP Sep 13 00:01:19.610822 systemd-networkd[1566]: cali9e60d384e03: Gained carrier Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.519 [INFO][5049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0 csi-node-driver- calico-system 3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67 998 0 2025-09-13 00:00:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 csi-node-driver-tlnn9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9e60d384e03 [] [] }} ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.519 [INFO][5049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.547 [INFO][5061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" 
HandleID="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.547 [INFO][5061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" HandleID="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"csi-node-driver-tlnn9", "timestamp":"2025-09-13 00:01:19.547416748 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.547 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.547 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.547 [INFO][5061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.558 [INFO][5061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.564 [INFO][5061] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.570 [INFO][5061] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.574 [INFO][5061] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.577 [INFO][5061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.577 [INFO][5061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.579 [INFO][5061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3 Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.586 [INFO][5061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.599 [INFO][5061] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.67/26] block=192.168.77.64/26 handle="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.601 [INFO][5061] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.67/26] handle="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.602 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:19.643153 containerd[1701]: 2025-09-13 00:01:19.602 [INFO][5061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.67/26] IPv6=[] ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" HandleID="k8s-pod-network.a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.645452 containerd[1701]: 2025-09-13 00:01:19.604 [INFO][5049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"csi-node-driver-tlnn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e60d384e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:19.645452 containerd[1701]: 2025-09-13 00:01:19.605 [INFO][5049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.67/32] ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.645452 containerd[1701]: 2025-09-13 00:01:19.605 [INFO][5049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e60d384e03 ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.645452 containerd[1701]: 2025-09-13 00:01:19.611 [INFO][5049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" 
Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.645452 containerd[1701]: 2025-09-13 00:01:19.618 [INFO][5049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3", Pod:"csi-node-driver-tlnn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e60d384e03", MAC:"be:75:4e:f6:41:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:19.645452 containerd[1701]: 2025-09-13 00:01:19.637 [INFO][5049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3" Namespace="calico-system" Pod="csi-node-driver-tlnn9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:01:19.666905 containerd[1701]: time="2025-09-13T00:01:19.666818779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:19.667088 containerd[1701]: time="2025-09-13T00:01:19.666874539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:19.667088 containerd[1701]: time="2025-09-13T00:01:19.666885779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:19.667088 containerd[1701]: time="2025-09-13T00:01:19.666962179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:19.693319 systemd[1]: Started cri-containerd-a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3.scope - libcontainer container a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3. 
Sep 13 00:01:19.718969 containerd[1701]: time="2025-09-13T00:01:19.718925377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tlnn9,Uid:3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67,Namespace:calico-system,Attempt:1,} returns sandbox id \"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3\"" Sep 13 00:01:20.335969 containerd[1701]: time="2025-09-13T00:01:20.335921899Z" level=info msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" Sep 13 00:01:20.337761 containerd[1701]: time="2025-09-13T00:01:20.337078862Z" level=info msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.424 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.424 [INFO][5136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" iface="eth0" netns="/var/run/netns/cni-1f5eb565-f633-43de-0d30-9668b091f228" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.424 [INFO][5136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" iface="eth0" netns="/var/run/netns/cni-1f5eb565-f633-43de-0d30-9668b091f228" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.425 [INFO][5136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" iface="eth0" netns="/var/run/netns/cni-1f5eb565-f633-43de-0d30-9668b091f228" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.425 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.425 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.476 [INFO][5152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.476 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.476 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.493 [WARNING][5152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.493 [INFO][5152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.495 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:20.505291 containerd[1701]: 2025-09-13 00:01:20.500 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:01:20.508377 containerd[1701]: time="2025-09-13T00:01:20.505686925Z" level=info msg="TearDown network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" successfully" Sep 13 00:01:20.508377 containerd[1701]: time="2025-09-13T00:01:20.508371811Z" level=info msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" returns successfully" Sep 13 00:01:20.510628 containerd[1701]: time="2025-09-13T00:01:20.510428135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-cs9x9,Uid:ecd9d5e5-4b3f-4586-9384-58b93046b59b,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:01:20.513226 systemd[1]: run-netns-cni\x2d1f5eb565\x2df633\x2d43de\x2d0d30\x2d9668b091f228.mount: Deactivated successfully. Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.453 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.453 [INFO][5135] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" iface="eth0" netns="/var/run/netns/cni-af64594b-632e-9cd5-e764-012dac6129d6" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.453 [INFO][5135] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" iface="eth0" netns="/var/run/netns/cni-af64594b-632e-9cd5-e764-012dac6129d6" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.454 [INFO][5135] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" iface="eth0" netns="/var/run/netns/cni-af64594b-632e-9cd5-e764-012dac6129d6" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.454 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.454 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.491 [INFO][5157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.491 [INFO][5157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.495 [INFO][5157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.514 [WARNING][5157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.514 [INFO][5157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.516 [INFO][5157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:20.521544 containerd[1701]: 2025-09-13 00:01:20.518 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:01:20.522640 containerd[1701]: time="2025-09-13T00:01:20.522346123Z" level=info msg="TearDown network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" successfully" Sep 13 00:01:20.522640 containerd[1701]: time="2025-09-13T00:01:20.522378563Z" level=info msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" returns successfully" Sep 13 00:01:20.525774 containerd[1701]: time="2025-09-13T00:01:20.525369009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2s2jz,Uid:d20e0726-7801-43dd-aff4-c9d38bb82a12,Namespace:calico-system,Attempt:1,}" Sep 13 00:01:20.527781 systemd[1]: run-netns-cni\x2daf64594b\x2d632e\x2d9cd5\x2de764\x2d012dac6129d6.mount: Deactivated successfully. 
Sep 13 00:01:20.812437 systemd-networkd[1566]: calif3c534ffb14: Link UP Sep 13 00:01:20.814856 systemd-networkd[1566]: calif3c534ffb14: Gained carrier Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.683 [INFO][5167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0 calico-apiserver-f6b9bbb9d- calico-apiserver ecd9d5e5-4b3f-4586-9384-58b93046b59b 1008 0 2025-09-13 00:00:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f6b9bbb9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 calico-apiserver-f6b9bbb9d-cs9x9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif3c534ffb14 [] [] }} ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.683 [INFO][5167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.726 [INFO][5193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" HandleID="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.727 [INFO][5193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" HandleID="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"calico-apiserver-f6b9bbb9d-cs9x9", "timestamp":"2025-09-13 00:01:20.726339826 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.727 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.727 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.727 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.742 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.750 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.756 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.759 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.762 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.762 [INFO][5193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.765 [INFO][5193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.774 [INFO][5193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.792 [INFO][5193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.68/26] block=192.168.77.64/26 handle="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.792 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.68/26] handle="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.792 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:20.848142 containerd[1701]: 2025-09-13 00:01:20.792 [INFO][5193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.68/26] IPv6=[] ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" HandleID="k8s-pod-network.53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.849582 containerd[1701]: 2025-09-13 00:01:20.798 [INFO][5167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecd9d5e5-4b3f-4586-9384-58b93046b59b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"calico-apiserver-f6b9bbb9d-cs9x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3c534ffb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:20.849582 containerd[1701]: 2025-09-13 00:01:20.799 [INFO][5167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.68/32] ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.849582 containerd[1701]: 2025-09-13 00:01:20.799 [INFO][5167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3c534ffb14 ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.849582 containerd[1701]: 2025-09-13 00:01:20.816 [INFO][5167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.849582 containerd[1701]: 2025-09-13 00:01:20.818 [INFO][5167] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecd9d5e5-4b3f-4586-9384-58b93046b59b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb", Pod:"calico-apiserver-f6b9bbb9d-cs9x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3c534ffb14", MAC:"12:d7:44:23:28:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:20.849582 containerd[1701]: 2025-09-13 00:01:20.841 [INFO][5167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-cs9x9" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:01:20.903163 containerd[1701]: time="2025-09-13T00:01:20.902891267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:20.903163 containerd[1701]: time="2025-09-13T00:01:20.902955307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:20.903163 containerd[1701]: time="2025-09-13T00:01:20.902972067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:20.903617 containerd[1701]: time="2025-09-13T00:01:20.903059747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:20.923839 systemd-networkd[1566]: cali5f23ce8f4a9: Link UP Sep 13 00:01:20.926309 systemd-networkd[1566]: cali5f23ce8f4a9: Gained carrier Sep 13 00:01:20.948912 systemd[1]: Started cri-containerd-53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb.scope - libcontainer container 53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb. 
Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.664 [INFO][5178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0 goldmane-54d579b49d- calico-system d20e0726-7801-43dd-aff4-c9d38bb82a12 1009 0 2025-09-13 00:00:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 goldmane-54d579b49d-2s2jz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5f23ce8f4a9 [] [] }} ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.664 [INFO][5178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.762 [INFO][5191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" HandleID="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.763 [INFO][5191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" HandleID="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000376040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"goldmane-54d579b49d-2s2jz", "timestamp":"2025-09-13 00:01:20.762938309 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.763 [INFO][5191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.792 [INFO][5191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.793 [INFO][5191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.844 [INFO][5191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.852 [INFO][5191] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.867 [INFO][5191] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.872 [INFO][5191] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.878 [INFO][5191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.878 [INFO][5191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.881 [INFO][5191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186 Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.892 [INFO][5191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.909 [INFO][5191] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.69/26] block=192.168.77.64/26 handle="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.909 [INFO][5191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.69/26] handle="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.909 [INFO][5191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:20.963085 containerd[1701]: 2025-09-13 00:01:20.909 [INFO][5191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.69/26] IPv6=[] ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" HandleID="k8s-pod-network.68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.964874 containerd[1701]: 2025-09-13 00:01:20.915 [INFO][5178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d20e0726-7801-43dd-aff4-c9d38bb82a12", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"goldmane-54d579b49d-2s2jz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f23ce8f4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:20.964874 containerd[1701]: 2025-09-13 00:01:20.916 [INFO][5178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.69/32] ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.964874 containerd[1701]: 2025-09-13 00:01:20.916 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f23ce8f4a9 ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.964874 containerd[1701]: 2025-09-13 00:01:20.927 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:20.964874 containerd[1701]: 2025-09-13 00:01:20.928 [INFO][5178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" 
Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d20e0726-7801-43dd-aff4-c9d38bb82a12", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186", Pod:"goldmane-54d579b49d-2s2jz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f23ce8f4a9", MAC:"f6:cc:7d:f1:2e:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:20.964874 containerd[1701]: 2025-09-13 00:01:20.959 [INFO][5178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186" Namespace="calico-system" Pod="goldmane-54d579b49d-2s2jz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:01:21.009122 containerd[1701]: time="2025-09-13T00:01:21.008749068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:21.009122 containerd[1701]: time="2025-09-13T00:01:21.008816148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:21.009122 containerd[1701]: time="2025-09-13T00:01:21.008832068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:21.009122 containerd[1701]: time="2025-09-13T00:01:21.008919828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:21.032615 systemd-networkd[1566]: cali9e60d384e03: Gained IPv6LL Sep 13 00:01:21.035359 systemd[1]: Started cri-containerd-68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186.scope - libcontainer container 68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186. 
Sep 13 00:01:21.053487 containerd[1701]: time="2025-09-13T00:01:21.053429849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-cs9x9,Uid:ecd9d5e5-4b3f-4586-9384-58b93046b59b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb\"" Sep 13 00:01:21.183930 containerd[1701]: time="2025-09-13T00:01:21.183886825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-2s2jz,Uid:d20e0726-7801-43dd-aff4-c9d38bb82a12,Namespace:calico-system,Attempt:1,} returns sandbox id \"68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186\"" Sep 13 00:01:21.318139 containerd[1701]: time="2025-09-13T00:01:21.317957730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:21.321547 containerd[1701]: time="2025-09-13T00:01:21.321334858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:01:21.325354 containerd[1701]: time="2025-09-13T00:01:21.325147106Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:21.346364 containerd[1701]: time="2025-09-13T00:01:21.346283514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:21.347907 containerd[1701]: time="2025-09-13T00:01:21.347472437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 5.833063931s" Sep 13 00:01:21.347907 containerd[1701]: time="2025-09-13T00:01:21.347518437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:01:21.351805 containerd[1701]: time="2025-09-13T00:01:21.351254446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:01:21.358968 containerd[1701]: time="2025-09-13T00:01:21.358912023Z" level=info msg="CreateContainer within sandbox \"70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:01:21.403407 containerd[1701]: time="2025-09-13T00:01:21.403348484Z" level=info msg="CreateContainer within sandbox \"70e9d01886e54ad089b8297bc2ddf713e4ca03c834a472393dc0a872ce5a19ca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d6e8100cca128f74452c5f01d2f361df1d56833cd3f8bd572143a31bae0fdee2\"" Sep 13 00:01:21.404836 containerd[1701]: time="2025-09-13T00:01:21.403911165Z" level=info msg="StartContainer for \"d6e8100cca128f74452c5f01d2f361df1d56833cd3f8bd572143a31bae0fdee2\"" Sep 13 00:01:21.429320 systemd[1]: Started cri-containerd-d6e8100cca128f74452c5f01d2f361df1d56833cd3f8bd572143a31bae0fdee2.scope - libcontainer container d6e8100cca128f74452c5f01d2f361df1d56833cd3f8bd572143a31bae0fdee2. 
Sep 13 00:01:21.440630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193583833.mount: Deactivated successfully. Sep 13 00:01:21.475355 containerd[1701]: time="2025-09-13T00:01:21.475300127Z" level=info msg="StartContainer for \"d6e8100cca128f74452c5f01d2f361df1d56833cd3f8bd572143a31bae0fdee2\" returns successfully" Sep 13 00:01:21.843968 systemd[1]: Started sshd@7-10.200.20.16:22-10.200.16.10:54080.service - OpenSSH per-connection server daemon (10.200.16.10:54080). Sep 13 00:01:22.272979 sshd[5355]: Accepted publickey for core from 10.200.16.10 port 54080 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:22.275318 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:22.280487 systemd-logind[1678]: New session 10 of user core. Sep 13 00:01:22.287292 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:01:22.335010 containerd[1701]: time="2025-09-13T00:01:22.334682720Z" level=info msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" Sep 13 00:01:22.335010 containerd[1701]: time="2025-09-13T00:01:22.334792040Z" level=info msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" Sep 13 00:01:22.340079 containerd[1701]: time="2025-09-13T00:01:22.339774931Z" level=info msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" Sep 13 00:01:22.418095 kubelet[3206]: I0913 00:01:22.418029 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f7d8c959-hjvwh" podStartSLOduration=3.249586189 podStartE2EDuration="11.418008429s" podCreationTimestamp="2025-09-13 00:01:11 +0000 UTC" firstStartedPulling="2025-09-13 00:01:13.180970961 +0000 UTC m=+69.975741866" lastFinishedPulling="2025-09-13 00:01:21.349393241 +0000 UTC m=+78.144164106" observedRunningTime="2025-09-13 00:01:21.608949431 +0000 UTC m=+78.403720336" watchObservedRunningTime="2025-09-13 00:01:22.418008429 +0000 UTC m=+79.212779374" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.418 [INFO][5394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.419 [INFO][5394] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" iface="eth0" netns="/var/run/netns/cni-a8f0f2b0-eb1c-371b-74a4-107f6df2dc5c" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.421 [INFO][5394] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" iface="eth0" netns="/var/run/netns/cni-a8f0f2b0-eb1c-371b-74a4-107f6df2dc5c" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.422 [INFO][5394] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" iface="eth0" netns="/var/run/netns/cni-a8f0f2b0-eb1c-371b-74a4-107f6df2dc5c" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.422 [INFO][5394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.422 [INFO][5394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.469 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.470 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.470 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.489 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.489 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.499 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:22.504514 containerd[1701]: 2025-09-13 00:01:22.502 [INFO][5394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:01:22.504514 containerd[1701]: time="2025-09-13T00:01:22.503982304Z" level=info msg="TearDown network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" successfully" Sep 13 00:01:22.504514 containerd[1701]: time="2025-09-13T00:01:22.504010144Z" level=info msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" returns successfully" Sep 13 00:01:22.508934 systemd[1]: run-netns-cni\x2da8f0f2b0\x2deb1c\x2d371b\x2d74a4\x2d107f6df2dc5c.mount: Deactivated successfully. Sep 13 00:01:22.511854 containerd[1701]: time="2025-09-13T00:01:22.509348317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv5f6,Uid:ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386,Namespace:kube-system,Attempt:1,}" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.447 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.447 [INFO][5378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" iface="eth0" netns="/var/run/netns/cni-9f3bad5e-3bee-4508-fcbf-df20fa5c1e7c" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.448 [INFO][5378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" iface="eth0" netns="/var/run/netns/cni-9f3bad5e-3bee-4508-fcbf-df20fa5c1e7c" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.448 [INFO][5378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" iface="eth0" netns="/var/run/netns/cni-9f3bad5e-3bee-4508-fcbf-df20fa5c1e7c" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.448 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.448 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.487 [INFO][5413] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.487 [INFO][5413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.499 [INFO][5413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.514 [WARNING][5413] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.514 [INFO][5413] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.518 [INFO][5413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:22.522449 containerd[1701]: 2025-09-13 00:01:22.520 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:01:22.523586 containerd[1701]: time="2025-09-13T00:01:22.523491229Z" level=info msg="TearDown network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" successfully" Sep 13 00:01:22.523747 containerd[1701]: time="2025-09-13T00:01:22.523672869Z" level=info msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" returns successfully" Sep 13 00:01:22.525971 containerd[1701]: time="2025-09-13T00:01:22.525094712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-tt85v,Uid:aba7588f-007a-4a5b-a73c-d11a0b3eb891,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:01:22.530014 systemd[1]: run-netns-cni\x2d9f3bad5e\x2d3bee\x2d4508\x2dfcbf\x2ddf20fa5c1e7c.mount: Deactivated successfully. Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.442 [INFO][5384] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.442 [INFO][5384] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" iface="eth0" netns="/var/run/netns/cni-5e8883bb-8e4a-4784-3c79-cc3d7a991fe1" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.444 [INFO][5384] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" iface="eth0" netns="/var/run/netns/cni-5e8883bb-8e4a-4784-3c79-cc3d7a991fe1" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.446 [INFO][5384] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" iface="eth0" netns="/var/run/netns/cni-5e8883bb-8e4a-4784-3c79-cc3d7a991fe1" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.446 [INFO][5384] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.446 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.492 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.492 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.518 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.539 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.539 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.542 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:01:22.547284 containerd[1701]: 2025-09-13 00:01:22.545 [INFO][5384] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:01:22.550622 containerd[1701]: time="2025-09-13T00:01:22.548086365Z" level=info msg="TearDown network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" successfully" Sep 13 00:01:22.550622 containerd[1701]: time="2025-09-13T00:01:22.548477965Z" level=info msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" returns successfully" Sep 13 00:01:22.553117 containerd[1701]: time="2025-09-13T00:01:22.552541455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pthfz,Uid:d9018d05-ac4d-4e32-951d-a33c403e0852,Namespace:kube-system,Attempt:1,}" Sep 13 00:01:22.553515 systemd[1]: run-netns-cni\x2d5e8883bb\x2d8e4a\x2d4784\x2d3c79\x2dcc3d7a991fe1.mount: Deactivated successfully. Sep 13 00:01:22.695697 systemd-networkd[1566]: cali5f23ce8f4a9: Gained IPv6LL Sep 13 00:01:22.745159 sshd[5355]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:22.753579 systemd[1]: sshd@7-10.200.20.16:22-10.200.16.10:54080.service: Deactivated successfully. Sep 13 00:01:22.759009 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:01:22.770885 systemd-logind[1678]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:01:22.778940 systemd-logind[1678]: Removed session 10. 
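
The three teardown traces above all follow the same CNI DEL shape: enter the netns, release the IP under a single host-wide IPAM lock, and treat a missing allocation as a warning rather than an error (the address was already freed on an earlier attempt, so a retried DEL must stay safe). A minimal Go sketch of that idempotent-release pattern, with a plain mutex standing in for the host-wide lock and a hypothetical, truncated handle ID; this is illustrative only, not Calico's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// Toy allocator mimicking the release pattern in the trace above:
// acquire the host-wide lock, release by handle, and ignore a
// missing allocation instead of failing.
type allocator struct {
	mu     sync.Mutex        // stands in for the "host-wide IPAM lock"
	byHand map[string]string // handleID -> assigned IP
}

func (a *allocator) Release(handleID string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	ip, ok := a.byHand[handleID]
	if !ok {
		// Mirrors the WARNING in the log: CNI DEL can be retried
		// after a partial teardown, so release must be a no-op here.
		fmt.Printf("WARNING asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(a.byHand, handleID)
	fmt.Printf("released %s (was %s)\n", handleID, ip)
}

func main() {
	a := &allocator{byHand: map[string]string{}}
	h := "k8s-pod-network.9f19bdea..." // hypothetical handle, truncated
	a.Release(h)                       // address already gone: warn and ignore
	a.Release(h)                       // retried DEL is equally safe
}
```
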
Sep 13 00:01:22.808204 systemd-networkd[1566]: calie775441c720: Link UP Sep 13 00:01:22.811410 systemd-networkd[1566]: calie775441c720: Gained carrier Sep 13 00:01:22.823269 systemd-networkd[1566]: calif3c534ffb14: Gained IPv6LL Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.625 [INFO][5433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0 coredns-674b8bbfcf- kube-system ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386 1068 0 2025-09-13 00:00:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 coredns-674b8bbfcf-sv5f6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie775441c720 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.626 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.685 [INFO][5446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" HandleID="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.686 [INFO][5446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" HandleID="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"coredns-674b8bbfcf-sv5f6", "timestamp":"2025-09-13 00:01:22.685826837 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.686 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.686 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.686 [INFO][5446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.717 [INFO][5446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.726 [INFO][5446] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.736 [INFO][5446] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.742 [INFO][5446] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.748 [INFO][5446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.749 [INFO][5446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.753 [INFO][5446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.766 [INFO][5446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.787 [INFO][5446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.70/26] block=192.168.77.64/26 handle="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.787 [INFO][5446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.70/26] handle="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.787 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:22.853477 containerd[1701]: 2025-09-13 00:01:22.787 [INFO][5446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.70/26] IPv6=[] ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" HandleID="k8s-pod-network.4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.854432 containerd[1701]: 2025-09-13 00:01:22.797 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"coredns-674b8bbfcf-sv5f6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie775441c720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:22.854432 containerd[1701]: 2025-09-13 00:01:22.797 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.70/32] ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.854432 containerd[1701]: 2025-09-13 00:01:22.797 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie775441c720 ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.854432 containerd[1701]: 2025-09-13 00:01:22.812 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.854432 containerd[1701]: 2025-09-13 00:01:22.812 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c", Pod:"coredns-674b8bbfcf-sv5f6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie775441c720", MAC:"fa:70:62:f7:a0:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:22.854432 containerd[1701]: 2025-09-13 00:01:22.845 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c" Namespace="kube-system" Pod="coredns-674b8bbfcf-sv5f6" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:01:22.899305 systemd-networkd[1566]: cali6d49fa350f6: Link UP Sep 13 00:01:22.901154 systemd-networkd[1566]: cali6d49fa350f6: Gained carrier Sep 13 00:01:22.904689 containerd[1701]: time="2025-09-13T00:01:22.904548614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:22.905069 containerd[1701]: time="2025-09-13T00:01:22.904919055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:22.905069 containerd[1701]: time="2025-09-13T00:01:22.905005495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:22.905349 containerd[1701]: time="2025-09-13T00:01:22.905148416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.725 [INFO][5450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0 coredns-674b8bbfcf- kube-system d9018d05-ac4d-4e32-951d-a33c403e0852 1069 0 2025-09-13 00:00:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 coredns-674b8bbfcf-pthfz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6d49fa350f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.726 [INFO][5450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.789 [INFO][5486] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" HandleID="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.790 [INFO][5486] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" HandleID="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255790), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"coredns-674b8bbfcf-pthfz", "timestamp":"2025-09-13 00:01:22.789920674 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.790 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.790 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.790 [INFO][5486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.820 [INFO][5486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.828 [INFO][5486] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.836 [INFO][5486] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.838 [INFO][5486] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.847 [INFO][5486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.847 [INFO][5486] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.850 [INFO][5486] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.860 [INFO][5486] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.880 [INFO][5486] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.71/26] block=192.168.77.64/26 handle="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.880 [INFO][5486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.71/26] handle="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.880 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
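
Both assignment traces walk the same path: confirm the node's affinity for the 192.168.77.64/26 block, load it, and claim the next free address (.70 for sv5f6 above, .71 for pthfz here, .72 for the apiserver pod below). A toy Go sketch of the next-free-address scan, assuming a simple in-memory used-set; real Calico claims the IP by writing the block back to the datastore, which is what the "Writing block in order to claim IPs" lines record:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// A /26 holds 64 addresses; Calico carves IPAM pools into such
	// blocks and gives each node an affinity to one or more of them.
	block := netip.MustParsePrefix("192.168.77.64/26")

	// Assume .64 through .69 are already claimed, as the traces
	// suggest (this run hands out .70, the next .71, then .72).
	used := map[netip.Addr]bool{}
	a := block.Addr()
	for i := 0; i < 6; i++ {
		used[a] = true
		a = a.Next()
	}

	// Linear scan for the first free address still inside the block.
	for c := block.Addr(); block.Contains(c); c = c.Next() {
		if !used[c] {
			fmt.Println("claimed", c) // claimed 192.168.77.70
			break
		}
	}
}
```
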
Sep 13 00:01:22.928801 containerd[1701]: 2025-09-13 00:01:22.880 [INFO][5486] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.71/26] IPv6=[] ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" HandleID="k8s-pod-network.740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.929450 containerd[1701]: 2025-09-13 00:01:22.890 [INFO][5450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9018d05-ac4d-4e32-951d-a33c403e0852", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"coredns-674b8bbfcf-pthfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49fa350f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:22.929450 containerd[1701]: 2025-09-13 00:01:22.890 [INFO][5450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.71/32] ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.929450 containerd[1701]: 2025-09-13 00:01:22.890 [INFO][5450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d49fa350f6 ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.929450 containerd[1701]: 2025-09-13 00:01:22.901 [INFO][5450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.929450 containerd[1701]: 2025-09-13 00:01:22.902 [INFO][5450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9018d05-ac4d-4e32-951d-a33c403e0852", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f", Pod:"coredns-674b8bbfcf-pthfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49fa350f6", MAC:"62:a0:6a:43:16:12", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:22.929450 containerd[1701]: 2025-09-13 00:01:22.923 [INFO][5450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pthfz" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:01:22.944354 systemd[1]: Started cri-containerd-4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c.scope - libcontainer container 4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c. Sep 13 00:01:22.996725 systemd-networkd[1566]: cali9dd5970d87e: Link UP Sep 13 00:01:23.001494 systemd-networkd[1566]: cali9dd5970d87e: Gained carrier Sep 13 00:01:23.034650 containerd[1701]: time="2025-09-13T00:01:23.034193829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:23.034650 containerd[1701]: time="2025-09-13T00:01:23.034270669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:23.034650 containerd[1701]: time="2025-09-13T00:01:23.034287189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:23.034650 containerd[1701]: time="2025-09-13T00:01:23.034379549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.713 [INFO][5451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0 calico-apiserver-f6b9bbb9d- calico-apiserver aba7588f-007a-4a5b-a73c-d11a0b3eb891 1070 0 2025-09-13 00:00:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f6b9bbb9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-4f403f96f8 calico-apiserver-f6b9bbb9d-tt85v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9dd5970d87e [] [] }} ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.713 [INFO][5451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.813 [INFO][5481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" HandleID="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.813 [INFO][5481] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" HandleID="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-4f403f96f8", "pod":"calico-apiserver-f6b9bbb9d-tt85v", "timestamp":"2025-09-13 00:01:22.810981682 +0000 UTC"}, Hostname:"ci-4081.3.5-n-4f403f96f8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.813 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.880 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.881 [INFO][5481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-4f403f96f8' Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.922 [INFO][5481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.939 [INFO][5481] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.947 [INFO][5481] ipam/ipam.go 511: Trying affinity for 192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.950 [INFO][5481] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.956 [INFO][5481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.64/26 host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.956 [INFO][5481] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.64/26 handle="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.958 [INFO][5481] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.968 [INFO][5481] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.64/26 handle="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.982 [INFO][5481] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.77.72/26] block=192.168.77.64/26 handle="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.982 [INFO][5481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.72/26] handle="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" host="ci-4081.3.5-n-4f403f96f8" Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.982 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:01:23.039615 containerd[1701]: 2025-09-13 00:01:22.982 [INFO][5481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.72/26] IPv6=[] ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" HandleID="k8s-pod-network.1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.040388 containerd[1701]: 2025-09-13 00:01:22.989 [INFO][5451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aba7588f-007a-4a5b-a73c-d11a0b3eb891", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"", Pod:"calico-apiserver-f6b9bbb9d-tt85v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dd5970d87e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:23.040388 containerd[1701]: 2025-09-13 00:01:22.989 [INFO][5451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.72/32] ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.040388 containerd[1701]: 2025-09-13 00:01:22.989 [INFO][5451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dd5970d87e ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.040388 containerd[1701]: 2025-09-13 00:01:23.002 [INFO][5451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.040388 containerd[1701]: 2025-09-13 00:01:23.008 [INFO][5451] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aba7588f-007a-4a5b-a73c-d11a0b3eb891", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c", Pod:"calico-apiserver-f6b9bbb9d-tt85v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dd5970d87e", MAC:"4a:bc:64:59:d2:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:01:23.040388 containerd[1701]: 2025-09-13 00:01:23.027 [INFO][5451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c" Namespace="calico-apiserver" Pod="calico-apiserver-f6b9bbb9d-tt85v" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:01:23.045625 containerd[1701]: time="2025-09-13T00:01:23.045204294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sv5f6,Uid:ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386,Namespace:kube-system,Attempt:1,} returns sandbox id \"4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c\"" Sep 13 00:01:23.066176 containerd[1701]: time="2025-09-13T00:01:23.065352980Z" level=info msg="CreateContainer within sandbox \"4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:01:23.070630 systemd[1]: Started cri-containerd-740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f.scope - libcontainer container 740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f. Sep 13 00:01:23.085211 containerd[1701]: time="2025-09-13T00:01:23.082632219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:01:23.085211 containerd[1701]: time="2025-09-13T00:01:23.085205505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:01:23.085211 containerd[1701]: time="2025-09-13T00:01:23.085232745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:23.085792 containerd[1701]: time="2025-09-13T00:01:23.085453545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:01:23.109370 systemd[1]: Started cri-containerd-1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c.scope - libcontainer container 1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c. Sep 13 00:01:23.124388 containerd[1701]: time="2025-09-13T00:01:23.124219033Z" level=info msg="CreateContainer within sandbox \"4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad99ad3a01986cb695878559de9147955a665f5143e1c6c314ff91a8e9f7fff3\"" Sep 13 00:01:23.127284 containerd[1701]: time="2025-09-13T00:01:23.126389038Z" level=info msg="StartContainer for \"ad99ad3a01986cb695878559de9147955a665f5143e1c6c314ff91a8e9f7fff3\"" Sep 13 00:01:23.156308 containerd[1701]: time="2025-09-13T00:01:23.156169465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pthfz,Uid:d9018d05-ac4d-4e32-951d-a33c403e0852,Namespace:kube-system,Attempt:1,} returns sandbox id \"740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f\"" Sep 13 00:01:23.169059 containerd[1701]: time="2025-09-13T00:01:23.168751174Z" level=info msg="CreateContainer within sandbox \"740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:01:23.188374 systemd[1]: Started cri-containerd-ad99ad3a01986cb695878559de9147955a665f5143e1c6c314ff91a8e9f7fff3.scope - libcontainer container ad99ad3a01986cb695878559de9147955a665f5143e1c6c314ff91a8e9f7fff3. Sep 13 00:01:23.226044 containerd[1701]: time="2025-09-13T00:01:23.225899943Z" level=info msg="CreateContainer within sandbox \"740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15529fccef431eddc2e52dbfd545091687b2a9acd35e04557ad60e6c9416d25c\"" Sep 13 00:01:23.227337 containerd[1701]: time="2025-09-13T00:01:23.226787345Z" level=info msg="StartContainer for \"15529fccef431eddc2e52dbfd545091687b2a9acd35e04557ad60e6c9416d25c\"" Sep 13 00:01:23.268875 containerd[1701]: time="2025-09-13T00:01:23.268818160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f6b9bbb9d-tt85v,Uid:aba7588f-007a-4a5b-a73c-d11a0b3eb891,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c\"" Sep 13 00:01:23.289010 containerd[1701]: time="2025-09-13T00:01:23.287844963Z" level=info msg="StartContainer for \"ad99ad3a01986cb695878559de9147955a665f5143e1c6c314ff91a8e9f7fff3\" returns successfully" Sep 13 00:01:23.289203 systemd[1]: Started cri-containerd-15529fccef431eddc2e52dbfd545091687b2a9acd35e04557ad60e6c9416d25c.scope - libcontainer container 15529fccef431eddc2e52dbfd545091687b2a9acd35e04557ad60e6c9416d25c. 
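
The containerd entries embed RFC 3339 timestamps, so phase latencies can be read straight off the log: the sv5f6 sandbox was requested at 00:01:22.509 and its ID returned at 00:01:23.045, roughly 536 ms. A small Go check of that subtraction, using the two values quoted above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// containerd's structured lines carry RFC 3339 timestamps with
	// nanoseconds; subtracting two gives a rough phase latency.
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	run := parse("2025-09-13T00:01:22.509348317Z")   // RunPodSandbox issued
	ready := parse("2025-09-13T00:01:23.045204294Z") // sandbox id returned

	fmt.Println("sandbox setup took", ready.Sub(run)) // ~536ms
}
```
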
Sep 13 00:01:23.338861 containerd[1701]: time="2025-09-13T00:01:23.338599718Z" level=info msg="StartContainer for \"15529fccef431eddc2e52dbfd545091687b2a9acd35e04557ad60e6c9416d25c\" returns successfully" Sep 13 00:01:23.641023 kubelet[3206]: I0913 00:01:23.638679 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sv5f6" podStartSLOduration=74.638654436 podStartE2EDuration="1m14.638654436s" podCreationTimestamp="2025-09-13 00:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:23.637318393 +0000 UTC m=+80.432089298" watchObservedRunningTime="2025-09-13 00:01:23.638654436 +0000 UTC m=+80.433425341" Sep 13 00:01:23.847290 systemd-networkd[1566]: calie775441c720: Gained IPv6LL Sep 13 00:01:24.231287 systemd-networkd[1566]: cali6d49fa350f6: Gained IPv6LL Sep 13 00:01:24.629127 kubelet[3206]: I0913 00:01:24.627834 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pthfz" podStartSLOduration=75.627814592 podStartE2EDuration="1m15.627814592s" podCreationTimestamp="2025-09-13 00:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:01:23.668757864 +0000 UTC m=+80.463528849" watchObservedRunningTime="2025-09-13 00:01:24.627814592 +0000 UTC m=+81.422585497" Sep 13 00:01:24.807360 systemd-networkd[1566]: cali9dd5970d87e: Gained IPv6LL Sep 13 00:01:27.391580 containerd[1701]: time="2025-09-13T00:01:27.391256159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:27.395195 containerd[1701]: time="2025-09-13T00:01:27.395072167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:01:27.400298 containerd[1701]: time="2025-09-13T00:01:27.399920178Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:27.405322 containerd[1701]: time="2025-09-13T00:01:27.405277151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:27.406576 containerd[1701]: time="2025-09-13T00:01:27.405910272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 6.054613346s" Sep 13 00:01:27.406576 containerd[1701]: time="2025-09-13T00:01:27.405946392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:01:27.409132 containerd[1701]: time="2025-09-13T00:01:27.408192677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:01:27.431916 containerd[1701]: time="2025-09-13T00:01:27.431878531Z" level=info msg="CreateContainer within 
sandbox \"5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:01:27.484530 containerd[1701]: time="2025-09-13T00:01:27.484482250Z" level=info msg="CreateContainer within sandbox \"5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"43c7dacdd054577fa7e31147a979855beae33d3baf59c236dc2fd9afe4c8b491\"" Sep 13 00:01:27.486094 containerd[1701]: time="2025-09-13T00:01:27.486056053Z" level=info msg="StartContainer for \"43c7dacdd054577fa7e31147a979855beae33d3baf59c236dc2fd9afe4c8b491\"" Sep 13 00:01:27.536315 systemd[1]: Started cri-containerd-43c7dacdd054577fa7e31147a979855beae33d3baf59c236dc2fd9afe4c8b491.scope - libcontainer container 43c7dacdd054577fa7e31147a979855beae33d3baf59c236dc2fd9afe4c8b491. Sep 13 00:01:27.607385 containerd[1701]: time="2025-09-13T00:01:27.607247047Z" level=info msg="StartContainer for \"43c7dacdd054577fa7e31147a979855beae33d3baf59c236dc2fd9afe4c8b491\" returns successfully" Sep 13 00:01:27.835429 systemd[1]: Started sshd@8-10.200.20.16:22-10.200.16.10:54086.service - OpenSSH per-connection server daemon (10.200.16.10:54086). Sep 13 00:01:28.265249 sshd[5809]: Accepted publickey for core from 10.200.16.10 port 54086 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:28.267522 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:28.274654 systemd-logind[1678]: New session 11 of user core. Sep 13 00:01:28.279551 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:01:28.682301 kubelet[3206]: I0913 00:01:28.682238 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5564b5cb6-lt2mp" podStartSLOduration=52.992443593 podStartE2EDuration="1m2.682006117s" podCreationTimestamp="2025-09-13 00:00:26 +0000 UTC" firstStartedPulling="2025-09-13 00:01:17.717888791 +0000 UTC m=+74.512659696" lastFinishedPulling="2025-09-13 00:01:27.407451315 +0000 UTC m=+84.202222220" observedRunningTime="2025-09-13 00:01:27.660257327 +0000 UTC m=+84.455028232" watchObservedRunningTime="2025-09-13 00:01:28.682006117 +0000 UTC m=+85.476777022" Sep 13 00:01:28.701262 sshd[5809]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:28.706224 systemd[1]: sshd@8-10.200.20.16:22-10.200.16.10:54086.service: Deactivated successfully. Sep 13 00:01:28.709507 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:01:28.710315 systemd-logind[1678]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:01:28.711192 systemd-logind[1678]: Removed session 11. Sep 13 00:01:33.784396 systemd[1]: Started sshd@9-10.200.20.16:22-10.200.16.10:43296.service - OpenSSH per-connection server daemon (10.200.16.10:43296). 
Sep 13 00:01:33.884371 containerd[1701]: time="2025-09-13T00:01:33.884314222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:33.887801 containerd[1701]: time="2025-09-13T00:01:33.887556793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:01:33.893148 containerd[1701]: time="2025-09-13T00:01:33.891748166Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:33.897426 containerd[1701]: time="2025-09-13T00:01:33.897319384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:33.898387 containerd[1701]: time="2025-09-13T00:01:33.898347148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 6.490115791s" Sep 13 00:01:33.898387 containerd[1701]: time="2025-09-13T00:01:33.898382028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:01:33.899977 containerd[1701]: time="2025-09-13T00:01:33.899942793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:01:33.910382 containerd[1701]: time="2025-09-13T00:01:33.910335346Z" level=info msg="CreateContainer within sandbox \"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:01:33.951336 containerd[1701]: time="2025-09-13T00:01:33.951283158Z" level=info msg="CreateContainer within sandbox \"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ac3cb0c9ff37039c86ac2978617e6aeb9638f2c37e8ada969a738ad6ba2f6e4f\"" Sep 13 00:01:33.951826 containerd[1701]: time="2025-09-13T00:01:33.951803360Z" level=info msg="StartContainer for \"ac3cb0c9ff37039c86ac2978617e6aeb9638f2c37e8ada969a738ad6ba2f6e4f\"" Sep 13 00:01:33.991386 systemd[1]: Started cri-containerd-ac3cb0c9ff37039c86ac2978617e6aeb9638f2c37e8ada969a738ad6ba2f6e4f.scope - libcontainer container ac3cb0c9ff37039c86ac2978617e6aeb9638f2c37e8ada969a738ad6ba2f6e4f. Sep 13 00:01:34.028642 containerd[1701]: time="2025-09-13T00:01:34.028457287Z" level=info msg="StartContainer for \"ac3cb0c9ff37039c86ac2978617e6aeb9638f2c37e8ada969a738ad6ba2f6e4f\" returns successfully" Sep 13 00:01:34.196359 sshd[5851]: Accepted publickey for core from 10.200.16.10 port 43296 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:34.198727 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:34.204645 systemd-logind[1678]: New session 12 of user core. Sep 13 00:01:34.213377 systemd[1]: Started session-12.scope - Session 12 of User core. 
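
The pull entries above pair "bytes read" with a wall-clock duration, which yields an effective transfer rate (about 1.2 MiB/s for the csi image). A quick Go calculation from the two quoted figures:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Both figures copied from the csi pull entries above: compressed
	// bytes actually read, and the reported wall-clock pull duration.
	const bytesRead = 8227489
	d, err := time.ParseDuration("6.490115791s")
	if err != nil {
		panic(err)
	}
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.2f MiB in %s = %.2f MiB/s\n", mib, d, mib/d.Seconds())
}
```
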
Sep 13 00:01:34.580570 sshd[5851]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:34.587605 systemd[1]: sshd@9-10.200.20.16:22-10.200.16.10:43296.service: Deactivated successfully. Sep 13 00:01:34.589095 systemd-logind[1678]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:01:34.590883 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:01:34.592956 systemd-logind[1678]: Removed session 12. Sep 13 00:01:34.660449 systemd[1]: Started sshd@10-10.200.20.16:22-10.200.16.10:43310.service - OpenSSH per-connection server daemon (10.200.16.10:43310). Sep 13 00:01:35.079239 sshd[5906]: Accepted publickey for core from 10.200.16.10 port 43310 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:35.230478 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:35.235004 systemd-logind[1678]: New session 13 of user core. Sep 13 00:01:35.244275 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:01:35.693272 sshd[5906]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:35.698712 systemd[1]: Started sshd@11-10.200.20.16:22-10.200.16.10:43314.service - OpenSSH per-connection server daemon (10.200.16.10:43314). Sep 13 00:01:35.702031 systemd[1]: sshd@10-10.200.20.16:22-10.200.16.10:43310.service: Deactivated successfully. Sep 13 00:01:35.705752 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:01:35.706588 systemd-logind[1678]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:01:35.708209 systemd-logind[1678]: Removed session 13. Sep 13 00:01:36.154819 sshd[5916]: Accepted publickey for core from 10.200.16.10 port 43314 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:36.158532 sshd[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:36.166320 systemd-logind[1678]: New session 14 of user core. Sep 13 00:01:36.171337 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 13 00:01:36.384352 containerd[1701]: time="2025-09-13T00:01:36.384298624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:36.388873 containerd[1701]: time="2025-09-13T00:01:36.388821554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:01:36.393767 containerd[1701]: time="2025-09-13T00:01:36.392839164Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:36.632474 containerd[1701]: time="2025-09-13T00:01:36.630748532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:36.632474 containerd[1701]: time="2025-09-13T00:01:36.631816454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.731840061s" Sep 13 00:01:36.632474 containerd[1701]: time="2025-09-13T00:01:36.631848134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:01:36.633234 containerd[1701]: time="2025-09-13T00:01:36.633195778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:01:36.749430 containerd[1701]: time="2025-09-13T00:01:36.749231094Z" level=info msg="CreateContainer within sandbox \"53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:01:36.799510 containerd[1701]: time="2025-09-13T00:01:36.799464334Z" level=info msg="CreateContainer within sandbox \"53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b4fbf867dd019ee23f046a90b30f05b82d15ddfaedf2d638f59dad9f4144c7d2\"" Sep 13 00:01:36.800052 containerd[1701]: time="2025-09-13T00:01:36.800020536Z" level=info msg="StartContainer for \"b4fbf867dd019ee23f046a90b30f05b82d15ddfaedf2d638f59dad9f4144c7d2\"" Sep 13 00:01:36.858282 systemd[1]: Started cri-containerd-b4fbf867dd019ee23f046a90b30f05b82d15ddfaedf2d638f59dad9f4144c7d2.scope - libcontainer container b4fbf867dd019ee23f046a90b30f05b82d15ddfaedf2d638f59dad9f4144c7d2. Sep 13 00:01:36.913388 containerd[1701]: time="2025-09-13T00:01:36.912868445Z" level=info msg="StartContainer for \"b4fbf867dd019ee23f046a90b30f05b82d15ddfaedf2d638f59dad9f4144c7d2\" returns successfully" Sep 13 00:01:37.148019 sshd[5916]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:37.151726 systemd[1]: sshd@11-10.200.20.16:22-10.200.16.10:43314.service: Deactivated successfully. Sep 13 00:01:37.155714 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:01:37.157501 systemd-logind[1678]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:01:37.159007 systemd-logind[1678]: Removed session 14. 
Sep 13 00:01:38.695165 kubelet[3206]: I0913 00:01:38.695074 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-cs9x9" podStartSLOduration=63.119541299 podStartE2EDuration="1m18.695056818s" podCreationTimestamp="2025-09-13 00:00:20 +0000 UTC" firstStartedPulling="2025-09-13 00:01:21.057282498 +0000 UTC m=+77.852053403" lastFinishedPulling="2025-09-13 00:01:36.632798057 +0000 UTC m=+93.427568922" observedRunningTime="2025-09-13 00:01:37.668372808 +0000 UTC m=+94.463143713" watchObservedRunningTime="2025-09-13 00:01:38.695056818 +0000 UTC m=+95.489827723" Sep 13 00:01:40.320041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244417314.mount: Deactivated successfully. Sep 13 00:01:41.081022 containerd[1701]: time="2025-09-13T00:01:41.080976622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:41.084148 containerd[1701]: time="2025-09-13T00:01:41.084091029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:01:41.088627 containerd[1701]: time="2025-09-13T00:01:41.088562640Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:41.099621 containerd[1701]: time="2025-09-13T00:01:41.099532867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:41.100603 containerd[1701]: time="2025-09-13T00:01:41.100468269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.467234011s" Sep 13 00:01:41.100603 containerd[1701]: time="2025-09-13T00:01:41.100504909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:01:41.104371 containerd[1701]: time="2025-09-13T00:01:41.102431554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:01:41.108559 containerd[1701]: time="2025-09-13T00:01:41.108516488Z" level=info msg="CreateContainer within sandbox \"68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:01:41.157467 containerd[1701]: time="2025-09-13T00:01:41.157334445Z" level=info msg="CreateContainer within sandbox \"68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0a3472cf121639b3ac8387bbf9cd661475533ff20696068253e211ff1bdf16c0\"" Sep 13 00:01:41.158483 containerd[1701]: time="2025-09-13T00:01:41.158348048Z" level=info msg="StartContainer for \"0a3472cf121639b3ac8387bbf9cd661475533ff20696068253e211ff1bdf16c0\"" Sep 13 00:01:41.202535 systemd[1]: Started cri-containerd-0a3472cf121639b3ac8387bbf9cd661475533ff20696068253e211ff1bdf16c0.scope - libcontainer container 
0a3472cf121639b3ac8387bbf9cd661475533ff20696068253e211ff1bdf16c0. Sep 13 00:01:41.271960 containerd[1701]: time="2025-09-13T00:01:41.271875640Z" level=info msg="StartContainer for \"0a3472cf121639b3ac8387bbf9cd661475533ff20696068253e211ff1bdf16c0\" returns successfully" Sep 13 00:01:41.420353 containerd[1701]: time="2025-09-13T00:01:41.420302397Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:41.423716 containerd[1701]: time="2025-09-13T00:01:41.423529005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:01:41.428037 containerd[1701]: time="2025-09-13T00:01:41.427983775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 324.80854ms" Sep 13 00:01:41.428037 containerd[1701]: time="2025-09-13T00:01:41.428034576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:01:41.430716 containerd[1701]: time="2025-09-13T00:01:41.430533982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:01:41.439713 containerd[1701]: time="2025-09-13T00:01:41.439664363Z" level=info msg="CreateContainer within sandbox \"1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:01:41.484746 containerd[1701]: time="2025-09-13T00:01:41.484617351Z" level=info msg="CreateContainer within sandbox \"1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87223b306ede1cbcb938abbca34edb271dd42f5efa06a57c4e5a0b8f06d6c412\"" Sep 13 00:01:41.485612 containerd[1701]: time="2025-09-13T00:01:41.485381593Z" level=info msg="StartContainer for \"87223b306ede1cbcb938abbca34edb271dd42f5efa06a57c4e5a0b8f06d6c412\"" Sep 13 00:01:41.510286 systemd[1]: Started cri-containerd-87223b306ede1cbcb938abbca34edb271dd42f5efa06a57c4e5a0b8f06d6c412.scope - libcontainer container 87223b306ede1cbcb938abbca34edb271dd42f5efa06a57c4e5a0b8f06d6c412. 
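Note the contrast with the first calico/apiserver pull earlier in the log: that one read 44 530 807 bytes and took 2.731840061 s, while the pull above logs an ImageUpdate (not ImageCreate) event, reads only 77 bytes, and returns in 324.80854 ms — consistent with all content already being in the local store, leaving only a manifest check against the registry. The CRI plugin simply issues the pull again and lets the resolver short-circuit; a client-level sketch of the equivalent check-before-pull, assuming containerd's Go client (getOrPull is an illustrative helper, not a containerd API):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    // getOrPull returns the locally stored image when present and downloads
    // only on a miss; a hit corresponds to the tiny, sub-second "pull" above.
    func getOrPull(ctx context.Context, client *containerd.Client, ref string) (containerd.Image, error) {
        img, err := client.GetImage(ctx, ref)
        if err == nil {
            return img, nil // already in the content store
        }
        if !errdefs.IsNotFound(err) {
            return nil, err
        }
        return client.Pull(ctx, ref, containerd.WithPullUnpack)
    }

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := getOrPull(ctx, client, "ghcr.io/flatcar/calico/apiserver:v3.30.3")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("have image:", img.Name())
    }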
Sep 13 00:01:41.549768 containerd[1701]: time="2025-09-13T00:01:41.549728428Z" level=info msg="StartContainer for \"87223b306ede1cbcb938abbca34edb271dd42f5efa06a57c4e5a0b8f06d6c412\" returns successfully" Sep 13 00:01:41.695725 kubelet[3206]: I0913 00:01:41.694906 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f6b9bbb9d-tt85v" podStartSLOduration=63.538754448 podStartE2EDuration="1m21.694886657s" podCreationTimestamp="2025-09-13 00:00:20 +0000 UTC" firstStartedPulling="2025-09-13 00:01:23.272802289 +0000 UTC m=+80.067573194" lastFinishedPulling="2025-09-13 00:01:41.428934538 +0000 UTC m=+98.223705403" observedRunningTime="2025-09-13 00:01:41.690939287 +0000 UTC m=+98.485710192" watchObservedRunningTime="2025-09-13 00:01:41.694886657 +0000 UTC m=+98.489657522" Sep 13 00:01:41.718695 kubelet[3206]: I0913 00:01:41.718475 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-2s2jz" podStartSLOduration=55.803967754 podStartE2EDuration="1m15.717735511s" podCreationTimestamp="2025-09-13 00:00:26 +0000 UTC" firstStartedPulling="2025-09-13 00:01:21.187814634 +0000 UTC m=+77.982585539" lastFinishedPulling="2025-09-13 00:01:41.101582391 +0000 UTC m=+97.896353296" observedRunningTime="2025-09-13 00:01:41.71726255 +0000 UTC m=+98.512033455" watchObservedRunningTime="2025-09-13 00:01:41.717735511 +0000 UTC m=+98.512506416" Sep 13 00:01:41.938452 systemd[1]: Started sshd@12-10.200.20.16:22-10.200.16.10:54212.service - OpenSSH per-connection server daemon (10.200.16.10:54212). Sep 13 00:01:42.369253 sshd[6145]: Accepted publickey for core from 10.200.16.10 port 54212 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:42.373786 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:42.381183 systemd-logind[1678]: New session 15 of user core. Sep 13 00:01:42.384292 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:01:42.740024 systemd[1]: run-containerd-runc-k8s.io-0a3472cf121639b3ac8387bbf9cd661475533ff20696068253e211ff1bdf16c0-runc.yhoPQd.mount: Deactivated successfully. Sep 13 00:01:42.874895 sshd[6145]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:42.883731 systemd[1]: sshd@12-10.200.20.16:22-10.200.16.10:54212.service: Deactivated successfully. Sep 13 00:01:42.888712 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:01:42.891488 systemd-logind[1678]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:01:42.893411 systemd-logind[1678]: Removed session 15. 
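The pod_startup_latency_tracker numbers above are mutually consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the E2E duration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A small Go check using the goldmane timestamps quoted above reproduces both values exactly (other entries can differ by a few tens of nanoseconds because kubelet computes with the m=+… monotonic-clock readings):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go accepts a fractional seconds field when parsing even though
        // the layout omits it.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-09-13 00:00:26 +0000 UTC")
        firstPull := parse("2025-09-13 00:01:21.187814634 +0000 UTC")
        lastPull := parse("2025-09-13 00:01:41.101582391 +0000 UTC")
        observed := parse("2025-09-13 00:01:41.717735511 +0000 UTC") // watchObservedRunningTime

        e2e := observed.Sub(created)       // 1m15.717735511s, matches podStartE2EDuration
        pulling := lastPull.Sub(firstPull) // 19.913767757s spent pulling images
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", e2e-pulling) // 55.803967754s, matches the log
    }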
Sep 13 00:01:43.144672 containerd[1701]: time="2025-09-13T00:01:43.144609979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:43.158585 containerd[1701]: time="2025-09-13T00:01:43.158536532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:01:43.164615 containerd[1701]: time="2025-09-13T00:01:43.164569626Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:43.184040 containerd[1701]: time="2025-09-13T00:01:43.183993033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:01:43.184784 containerd[1701]: time="2025-09-13T00:01:43.184750155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.754040093s" Sep 13 00:01:43.184847 containerd[1701]: time="2025-09-13T00:01:43.184788955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:01:43.198646 containerd[1701]: time="2025-09-13T00:01:43.198539628Z" level=info msg="CreateContainer within sandbox \"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:01:43.242656 containerd[1701]: time="2025-09-13T00:01:43.242598214Z" level=info msg="CreateContainer within sandbox \"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5f68bb28fe807c4445b9406e40617e05e734b7291d80d4f64f0207524e91ddef\"" Sep 13 00:01:43.244737 containerd[1701]: time="2025-09-13T00:01:43.243264655Z" level=info msg="StartContainer for \"5f68bb28fe807c4445b9406e40617e05e734b7291d80d4f64f0207524e91ddef\"" Sep 13 00:01:43.285331 systemd[1]: Started cri-containerd-5f68bb28fe807c4445b9406e40617e05e734b7291d80d4f64f0207524e91ddef.scope - libcontainer container 5f68bb28fe807c4445b9406e40617e05e734b7291d80d4f64f0207524e91ddef. 
Sep 13 00:01:43.324705 containerd[1701]: time="2025-09-13T00:01:43.324654091Z" level=info msg="StartContainer for \"5f68bb28fe807c4445b9406e40617e05e734b7291d80d4f64f0207524e91ddef\" returns successfully" Sep 13 00:01:43.438313 kubelet[3206]: I0913 00:01:43.437821 3206 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:01:43.438313 kubelet[3206]: I0913 00:01:43.437870 3206 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:01:43.702607 kubelet[3206]: I0913 00:01:43.702157 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tlnn9" podStartSLOduration=54.244456399 podStartE2EDuration="1m17.702136518s" podCreationTimestamp="2025-09-13 00:00:26 +0000 UTC" firstStartedPulling="2025-09-13 00:01:19.728017638 +0000 UTC m=+76.522788503" lastFinishedPulling="2025-09-13 00:01:43.185697717 +0000 UTC m=+99.980468622" observedRunningTime="2025-09-13 00:01:43.701381876 +0000 UTC m=+100.496152781" watchObservedRunningTime="2025-09-13 00:01:43.702136518 +0000 UTC m=+100.496907423" Sep 13 00:01:47.516448 update_engine[1679]: I20250913 00:01:47.516223 1679 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:01:47.516448 update_engine[1679]: I20250913 00:01:47.516278 1679 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:01:47.516789 update_engine[1679]: I20250913 00:01:47.516485 1679 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:01:47.516913 update_engine[1679]: I20250913 00:01:47.516845 1679 omaha_request_params.cc:62] Current group set to lts Sep 13 00:01:47.517094 update_engine[1679]: I20250913 00:01:47.516999 1679 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:01:47.517094 update_engine[1679]: I20250913 00:01:47.517019 1679 update_attempter.cc:643] Scheduling an action processor start. Sep 13 00:01:47.517094 update_engine[1679]: I20250913 00:01:47.517040 1679 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:01:47.517920 update_engine[1679]: I20250913 00:01:47.517851 1679 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:01:47.518545 update_engine[1679]: I20250913 00:01:47.518213 1679 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:01:47.521696 update_engine[1679]: I20250913 00:01:47.518242 1679 omaha_request_action.cc:272] Request: Sep 13 00:01:47.521696 update_engine[1679]: [Omaha request XML body not captured] Sep 13 00:01:47.521696 update_engine[1679]: I20250913 00:01:47.519364 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:47.522884 update_engine[1679]: I20250913 00:01:47.522508 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:47.522884 update_engine[1679]: I20250913 00:01:47.522830 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:01:47.523341 locksmithd[1767]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:01:47.627579 update_engine[1679]: E20250913 00:01:47.627516 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:47.627719 update_engine[1679]: I20250913 00:01:47.627622 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:01:47.966462 systemd[1]: Started sshd@13-10.200.20.16:22-10.200.16.10:54228.service - OpenSSH per-connection server daemon (10.200.16.10:54228). Sep 13 00:01:48.395895 sshd[6226]: Accepted publickey for core from 10.200.16.10 port 54228 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:48.397152 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:48.401776 systemd-logind[1678]: New session 16 of user core. Sep 13 00:01:48.407290 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:01:48.820245 sshd[6226]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:48.824001 systemd-logind[1678]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:01:48.824766 systemd[1]: sshd@13-10.200.20.16:22-10.200.16.10:54228.service: Deactivated successfully. Sep 13 00:01:48.827167 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:01:48.828713 systemd-logind[1678]: Removed session 16. Sep 13 00:01:53.902356 systemd[1]: Started sshd@14-10.200.20.16:22-10.200.16.10:38744.service - OpenSSH per-connection server daemon (10.200.16.10:38744). Sep 13 00:01:54.309864 sshd[6246]: Accepted publickey for core from 10.200.16.10 port 38744 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:01:54.311461 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:01:54.315958 systemd-logind[1678]: New session 17 of user core. Sep 13 00:01:54.322313 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:01:54.729406 sshd[6246]: pam_unix(sshd:session): session closed for user core Sep 13 00:01:54.734717 systemd-logind[1678]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:01:54.735706 systemd[1]: sshd@14-10.200.20.16:22-10.200.16.10:38744.service: Deactivated successfully. Sep 13 00:01:54.738284 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:01:54.740732 systemd-logind[1678]: Removed session 17. Sep 13 00:01:57.514853 update_engine[1679]: I20250913 00:01:57.514333 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:01:57.514853 update_engine[1679]: I20250913 00:01:57.514557 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:01:57.514853 update_engine[1679]: I20250913 00:01:57.514786 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:01:57.583957 update_engine[1679]: E20250913 00:01:57.583838 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:01:57.583957 update_engine[1679]: I20250913 00:01:57.583925 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:01:59.810386 systemd[1]: Started sshd@15-10.200.20.16:22-10.200.16.10:38752.service - OpenSSH per-connection server daemon (10.200.16.10:38752). 
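The Omaha check above fails by design: "Posting an Omaha request to disabled" shows the configured update URL is the literal string "disabled" — presumably the SERVER=disabled setting Flatcar uses to turn off update checks, an assumption, since the config file itself is not in this log — so libcurl's DNS lookup of a host named "disabled" can never succeed. The fetcher then re-attempts on a timer; from the two failures logged above ("retry 1" at 00:01:47.627622, "retry 2" at 00:01:57.583925), the observed spacing is just under 10 s. A trivial Go check of that interval, using the journal timestamps as fixed literals:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Inputs are the fixed journal times quoted above, so parse errors
        // are ignored for brevity.
        const layout = "15:04:05.000000"
        retry1, _ := time.Parse(layout, "00:01:47.627622")
        retry2, _ := time.Parse(layout, "00:01:57.583925")
        fmt.Println(retry2.Sub(retry1)) // 9.956303s: retries are paced ~10s apart here
    }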
Sep 13 00:02:00.223876 sshd[6278]: Accepted publickey for core from 10.200.16.10 port 38752 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:00.225406 sshd[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:00.229756 systemd-logind[1678]: New session 18 of user core. Sep 13 00:02:00.235278 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:02:00.632057 sshd[6278]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:00.635573 systemd[1]: sshd@15-10.200.20.16:22-10.200.16.10:38752.service: Deactivated successfully. Sep 13 00:02:00.637281 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:02:00.637899 systemd-logind[1678]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:02:00.639177 systemd-logind[1678]: Removed session 18. Sep 13 00:02:00.707876 systemd[1]: Started sshd@16-10.200.20.16:22-10.200.16.10:44810.service - OpenSSH per-connection server daemon (10.200.16.10:44810). Sep 13 00:02:01.122127 sshd[6291]: Accepted publickey for core from 10.200.16.10 port 44810 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:01.123640 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:01.128441 systemd-logind[1678]: New session 19 of user core. Sep 13 00:02:01.134291 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:02:01.730727 sshd[6291]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:01.735659 systemd[1]: sshd@16-10.200.20.16:22-10.200.16.10:44810.service: Deactivated successfully. Sep 13 00:02:01.737672 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:02:01.738794 systemd-logind[1678]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:02:01.739996 systemd-logind[1678]: Removed session 19. Sep 13 00:02:01.813800 systemd[1]: Started sshd@17-10.200.20.16:22-10.200.16.10:44816.service - OpenSSH per-connection server daemon (10.200.16.10:44816). Sep 13 00:02:02.233877 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 44816 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:02.235603 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:02.240926 systemd-logind[1678]: New session 20 of user core. Sep 13 00:02:02.248293 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:02:03.194832 sshd[6301]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:03.199268 systemd[1]: sshd@17-10.200.20.16:22-10.200.16.10:44816.service: Deactivated successfully. Sep 13 00:02:03.202490 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:02:03.204043 systemd-logind[1678]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:02:03.208518 systemd-logind[1678]: Removed session 20. Sep 13 00:02:03.276506 systemd[1]: Started sshd@18-10.200.20.16:22-10.200.16.10:44820.service - OpenSSH per-connection server daemon (10.200.16.10:44820). Sep 13 00:02:03.320399 containerd[1701]: time="2025-09-13T00:02:03.320357600Z" level=info msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.366 [WARNING][6334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d20e0726-7801-43dd-aff4-c9d38bb82a12", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186", Pod:"goldmane-54d579b49d-2s2jz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f23ce8f4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.366 [INFO][6334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.366 [INFO][6334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" iface="eth0" netns="" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.366 [INFO][6334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.366 [INFO][6334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.392 [INFO][6343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.392 [INFO][6343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.393 [INFO][6343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.402 [WARNING][6343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.402 [INFO][6343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.404 [INFO][6343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.407534 containerd[1701]: 2025-09-13 00:02:03.405 [INFO][6334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.407534 containerd[1701]: time="2025-09-13T00:02:03.407386327Z" level=info msg="TearDown network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" successfully" Sep 13 00:02:03.407534 containerd[1701]: time="2025-09-13T00:02:03.407413807Z" level=info msg="StopPodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" returns successfully" Sep 13 00:02:03.408419 containerd[1701]: time="2025-09-13T00:02:03.408303529Z" level=info msg="RemovePodSandbox for \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" Sep 13 00:02:03.410726 containerd[1701]: time="2025-09-13T00:02:03.410683855Z" level=info msg="Forcibly stopping sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\"" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.449 [WARNING][6357] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"d20e0726-7801-43dd-aff4-c9d38bb82a12", ResourceVersion:"1249", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"68ecaf49b4596714835fcd20a142a85220745fbf8da343834c40a6422b8d7186", Pod:"goldmane-54d579b49d-2s2jz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5f23ce8f4a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.450 [INFO][6357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.450 [INFO][6357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" iface="eth0" netns="" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.450 [INFO][6357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.450 [INFO][6357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.467 [INFO][6364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.467 [INFO][6364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.468 [INFO][6364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.477 [WARNING][6364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.477 [INFO][6364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" HandleID="k8s-pod-network.ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Workload="ci--4081.3.5--n--4f403f96f8-k8s-goldmane--54d579b49d--2s2jz-eth0" Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.479 [INFO][6364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.482761 containerd[1701]: 2025-09-13 00:02:03.480 [INFO][6357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e" Sep 13 00:02:03.483958 containerd[1701]: time="2025-09-13T00:02:03.482653986Z" level=info msg="TearDown network for sandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" successfully" Sep 13 00:02:03.495096 containerd[1701]: time="2025-09-13T00:02:03.494834375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:03.495096 containerd[1701]: time="2025-09-13T00:02:03.494948375Z" level=info msg="RemovePodSandbox \"ce019f84ce75487c7596e94f15ba66568194cfac9effb4f5df7e68648739932e\" returns successfully" Sep 13 00:02:03.495881 containerd[1701]: time="2025-09-13T00:02:03.495597977Z" level=info msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.534 [WARNING][6378] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aba7588f-007a-4a5b-a73c-d11a0b3eb891", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c", Pod:"calico-apiserver-f6b9bbb9d-tt85v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dd5970d87e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.534 [INFO][6378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.534 [INFO][6378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" iface="eth0" netns="" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.534 [INFO][6378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.534 [INFO][6378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.557 [INFO][6385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.557 [INFO][6385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.557 [INFO][6385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.568 [WARNING][6385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.568 [INFO][6385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.572 [INFO][6385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.577441 containerd[1701]: 2025-09-13 00:02:03.575 [INFO][6378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.579070 containerd[1701]: time="2025-09-13T00:02:03.578185773Z" level=info msg="TearDown network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" successfully" Sep 13 00:02:03.579070 containerd[1701]: time="2025-09-13T00:02:03.578223693Z" level=info msg="StopPodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" returns successfully" Sep 13 00:02:03.580070 containerd[1701]: time="2025-09-13T00:02:03.580006337Z" level=info msg="RemovePodSandbox for \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" Sep 13 00:02:03.580429 containerd[1701]: time="2025-09-13T00:02:03.580077218Z" level=info msg="Forcibly stopping sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\"" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.631 [WARNING][6399] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aba7588f-007a-4a5b-a73c-d11a0b3eb891", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"1d8ea07bc0ca2b3d2d1272c9c5d40bc116f2b058a2adc64de69a7d297343716c", Pod:"calico-apiserver-f6b9bbb9d-tt85v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dd5970d87e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.632 [INFO][6399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.632 [INFO][6399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" iface="eth0" netns="" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.632 [INFO][6399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.632 [INFO][6399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.656 [INFO][6406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.656 [INFO][6406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.656 [INFO][6406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.676 [WARNING][6406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.676 [INFO][6406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" HandleID="k8s-pod-network.5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--tt85v-eth0" Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.678 [INFO][6406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.683966 containerd[1701]: 2025-09-13 00:02:03.681 [INFO][6399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81" Sep 13 00:02:03.684780 containerd[1701]: time="2025-09-13T00:02:03.684091345Z" level=info msg="TearDown network for sandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" successfully" Sep 13 00:02:03.691080 sshd[6324]: Accepted publickey for core from 10.200.16.10 port 44820 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:03.693494 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:03.700296 systemd-logind[1678]: New session 21 of user core. Sep 13 00:02:03.703911 containerd[1701]: time="2025-09-13T00:02:03.703871712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:03.704239 containerd[1701]: time="2025-09-13T00:02:03.704133433Z" level=info msg="RemovePodSandbox \"5d0d16739c660a98bfedc280ab7c0cdfc51d9d169a3494898ddaca2abeb30a81\" returns successfully" Sep 13 00:02:03.704849 containerd[1701]: time="2025-09-13T00:02:03.704581354Z" level=info msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" Sep 13 00:02:03.706507 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.742 [WARNING][6421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9018d05-ac4d-4e32-951d-a33c403e0852", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f", Pod:"coredns-674b8bbfcf-pthfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49fa350f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.742 [INFO][6421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.742 [INFO][6421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" iface="eth0" netns="" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.742 [INFO][6421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.742 [INFO][6421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.765 [INFO][6428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.765 [INFO][6428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.765 [INFO][6428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.774 [WARNING][6428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.774 [INFO][6428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.775 [INFO][6428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.778343 containerd[1701]: 2025-09-13 00:02:03.776 [INFO][6421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.779415 containerd[1701]: time="2025-09-13T00:02:03.779052771Z" level=info msg="TearDown network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" successfully" Sep 13 00:02:03.779415 containerd[1701]: time="2025-09-13T00:02:03.779083571Z" level=info msg="StopPodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" returns successfully" Sep 13 00:02:03.780191 containerd[1701]: time="2025-09-13T00:02:03.779943373Z" level=info msg="RemovePodSandbox for \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" Sep 13 00:02:03.780191 containerd[1701]: time="2025-09-13T00:02:03.779976693Z" level=info msg="Forcibly stopping sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\"" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.814 [WARNING][6442] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d9018d05-ac4d-4e32-951d-a33c403e0852", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"740b77f3e5d8647aa39a2fde1195f6dc0ad07a0364dbdc6926da9b9882fd483f", Pod:"coredns-674b8bbfcf-pthfz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49fa350f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.814 [INFO][6442] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.814 [INFO][6442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" iface="eth0" netns="" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.814 [INFO][6442] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.814 [INFO][6442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.833 [INFO][6449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.833 [INFO][6449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.834 [INFO][6449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.844 [WARNING][6449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.844 [INFO][6449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" HandleID="k8s-pod-network.63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--pthfz-eth0" Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.845 [INFO][6449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.848614 containerd[1701]: 2025-09-13 00:02:03.847 [INFO][6442] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597" Sep 13 00:02:03.850153 containerd[1701]: time="2025-09-13T00:02:03.849185698Z" level=info msg="TearDown network for sandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" successfully" Sep 13 00:02:03.857433 containerd[1701]: time="2025-09-13T00:02:03.857382997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:03.857547 containerd[1701]: time="2025-09-13T00:02:03.857466517Z" level=info msg="RemovePodSandbox \"63418a93fcbe5b2219383cf93f6b6fd7d606460dc48e3904ae26fec55fb3f597\" returns successfully" Sep 13 00:02:03.858073 containerd[1701]: time="2025-09-13T00:02:03.858045879Z" level=info msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.892 [WARNING][6464] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
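The sequence above (release by handle ID, then by workload ID, with the "Asked to release address but it doesn't exist. Ignoring" warning in between) traces Calico's idempotent IPAM release: the whole exchange runs between "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", and a missing allocation is treated as already released rather than as an error, so repeated CNI DEL calls for the same sandbox stay safe. Below is a minimal Go sketch of that pattern; the types and the process-local mutex standing in for the host-wide lock are assumptions for illustration, not the actual code in ipam/ipam_plugin.go.

```go
// Hypothetical sketch of the release flow visible in the log. Not Calico's
// real implementation; names and the in-memory store are illustrative.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("address not found")

// ipamStore stands in for the IPAM datastore; keys are handle IDs.
type ipamStore struct {
	mu    sync.Mutex // stands in for the host-wide IPAM lock
	addrs map[string]string
}

func (s *ipamStore) releaseByKey(key string) error {
	if _, ok := s.addrs[key]; !ok {
		return errNotFound
	}
	delete(s.addrs, key)
	return nil
}

// releaseIPs mirrors the ordering in the log: lock, release by handle ID,
// ignore "doesn't exist", fall back to the workload ID, unlock.
func (s *ipamStore) releaseIPs(handleID, workloadID string) {
	s.mu.Lock() // "About to acquire host-wide IPAM lock." / "Acquired ..."
	defer s.mu.Unlock() // corresponds to "Released host-wide IPAM lock."

	if err := s.releaseByKey(handleID); errors.Is(err, errNotFound) {
		// "Asked to release address but it doesn't exist. Ignoring"
		fmt.Printf("WARNING: no address for handle %s, ignoring\n", handleID)
	}
	// "Releasing address using workloadID" -- the fallback keyed on the pod.
	if err := s.releaseByKey(workloadID); errors.Is(err, errNotFound) {
		fmt.Printf("WARNING: no address for workload %s, ignoring\n", workloadID)
	}
}

func main() {
	s := &ipamStore{addrs: map[string]string{}}
	// Releasing an already-released address must still succeed, as in the log.
	s.releaseIPs("k8s-pod-network.63418a93...", "ci-4081.3.5-n-4f403f96f8/coredns")
}
```

Treating "not found" as success is what makes the repeated StopPodSandbox and "Forcibly stopping" passes below harmless.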
ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecd9d5e5-4b3f-4586-9384-58b93046b59b", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb", Pod:"calico-apiserver-f6b9bbb9d-cs9x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3c534ffb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.892 [INFO][6464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.892 [INFO][6464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" iface="eth0" netns="" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.892 [INFO][6464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.892 [INFO][6464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.912 [INFO][6472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.912 [INFO][6472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.912 [INFO][6472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.921 [WARNING][6472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.921 [INFO][6472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.923 [INFO][6472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:03.927631 containerd[1701]: 2025-09-13 00:02:03.925 [INFO][6464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:03.928866 containerd[1701]: time="2025-09-13T00:02:03.928177606Z" level=info msg="TearDown network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" successfully" Sep 13 00:02:03.928866 containerd[1701]: time="2025-09-13T00:02:03.928213086Z" level=info msg="StopPodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" returns successfully" Sep 13 00:02:03.928866 containerd[1701]: time="2025-09-13T00:02:03.928697207Z" level=info msg="RemovePodSandbox for \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" Sep 13 00:02:03.928866 containerd[1701]: time="2025-09-13T00:02:03.928727647Z" level=info msg="Forcibly stopping sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\"" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:03.976 [WARNING][6490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0", GenerateName:"calico-apiserver-f6b9bbb9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ecd9d5e5-4b3f-4586-9384-58b93046b59b", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f6b9bbb9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"53391d8b68e78413e06dee45604ab6f33d42baada88cae5012a928583ee38abb", Pod:"calico-apiserver-f6b9bbb9d-cs9x9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3c534ffb14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:03.976 [INFO][6490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:03.976 [INFO][6490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" iface="eth0" netns="" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:03.976 [INFO][6490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:03.976 [INFO][6490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.016 [INFO][6497] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.016 [INFO][6497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.016 [INFO][6497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.036 [WARNING][6497] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.036 [INFO][6497] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" HandleID="k8s-pod-network.a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--apiserver--f6b9bbb9d--cs9x9-eth0" Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.038 [INFO][6497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.043599 containerd[1701]: 2025-09-13 00:02:04.041 [INFO][6490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780" Sep 13 00:02:04.045258 containerd[1701]: time="2025-09-13T00:02:04.043542400Z" level=info msg="TearDown network for sandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" successfully" Sep 13 00:02:04.053249 containerd[1701]: time="2025-09-13T00:02:04.053202983Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.053518 containerd[1701]: time="2025-09-13T00:02:04.053473504Z" level=info msg="RemovePodSandbox \"a952e1d1f99149eb6e530c4e6ac5531595aa444f95a1cb78f0853f34bfe89780\" returns successfully" Sep 13 00:02:04.054380 containerd[1701]: time="2025-09-13T00:02:04.054030785Z" level=info msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.109 [WARNING][6512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
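Each "Forcibly stopping sandbox" pass re-enters the same teardown path and hits the guard logged as "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP": the WorkloadEndpoint dumped alongside the warning already carries a different ContainerID, meaning a newer sandbox owns the endpoint, so a stale DELETE must not remove it, while IP release still proceeds. A hypothetical Go sketch of that guard, with illustrative types rather than the real ones in cni-plugin/k8s.go:

```go
// Illustrative sketch of the stale-DEL guard seen in the log; not the
// actual Calico cni-plugin types or logic.
package main

import "fmt"

type workloadEndpoint struct {
	Name        string
	ContainerID string // the sandbox currently backing this endpoint
}

// teardown keeps the WorkloadEndpoint when it is owned by a different
// (newer) sandbox, but still releases IPs for the sandbox being deleted.
func teardown(wep *workloadEndpoint, cniContainerID string) {
	if wep != nil && wep.ContainerID != cniContainerID {
		fmt.Println("WARNING: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP")
	} else if wep != nil {
		fmt.Println("deleting WorkloadEndpoint", wep.Name)
	}
	// IP release runs either way; it is idempotent (see the earlier sketch).
	fmt.Println("releasing IP address(es) for", cniContainerID)
}

func main() {
	wep := &workloadEndpoint{
		Name:        "coredns--674b8bbfcf--pthfz-eth0",
		ContainerID: "740b77f3e5d8...", // a newer sandbox owns the endpoint
	}
	teardown(wep, "63418a93fcbe...") // stale DEL for the removed sandbox
}
```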
ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3", Pod:"csi-node-driver-tlnn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e60d384e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.110 [INFO][6512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.110 [INFO][6512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" iface="eth0" netns="" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.110 [INFO][6512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.110 [INFO][6512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.144 [INFO][6519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.144 [INFO][6519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.144 [INFO][6519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.154 [WARNING][6519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.154 [INFO][6519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.156 [INFO][6519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.160619 containerd[1701]: 2025-09-13 00:02:04.159 [INFO][6512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.162261 containerd[1701]: time="2025-09-13T00:02:04.161228400Z" level=info msg="TearDown network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" successfully" Sep 13 00:02:04.162261 containerd[1701]: time="2025-09-13T00:02:04.161258200Z" level=info msg="StopPodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" returns successfully" Sep 13 00:02:04.163000 containerd[1701]: time="2025-09-13T00:02:04.162739883Z" level=info msg="RemovePodSandbox for \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" Sep 13 00:02:04.163000 containerd[1701]: time="2025-09-13T00:02:04.162771843Z" level=info msg="Forcibly stopping sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\"" Sep 13 00:02:04.266996 sshd[6324]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:04.271083 systemd[1]: sshd@18-10.200.20.16:22-10.200.16.10:44820.service: Deactivated successfully. Sep 13 00:02:04.276962 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:02:04.280276 systemd-logind[1678]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:02:04.283843 systemd-logind[1678]: Removed session 21. Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.235 [WARNING][6534] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3af0fb4e-a0fb-491d-8bd0-e7b0ffc69d67", ResourceVersion:"1268", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"a66a20e7518fcc27999186856fcdd9890c5f20a576ba4c29481afc22af3fb3e3", Pod:"csi-node-driver-tlnn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9e60d384e03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.235 [INFO][6534] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.235 [INFO][6534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" iface="eth0" netns="" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.235 [INFO][6534] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.235 [INFO][6534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.258 [INFO][6542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.258 [INFO][6542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.258 [INFO][6542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.280 [WARNING][6542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.280 [INFO][6542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" HandleID="k8s-pod-network.251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Workload="ci--4081.3.5--n--4f403f96f8-k8s-csi--node--driver--tlnn9-eth0" Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.283 [INFO][6542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.287461 containerd[1701]: 2025-09-13 00:02:04.285 [INFO][6534] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1" Sep 13 00:02:04.287916 containerd[1701]: time="2025-09-13T00:02:04.287504180Z" level=info msg="TearDown network for sandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" successfully" Sep 13 00:02:04.298984 containerd[1701]: time="2025-09-13T00:02:04.298830767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.298984 containerd[1701]: time="2025-09-13T00:02:04.298980407Z" level=info msg="RemovePodSandbox \"251be6b3f3f181de49d19e4dcd1f8eedef2888cce2c0da390b50941aecbff7a1\" returns successfully" Sep 13 00:02:04.300503 containerd[1701]: time="2025-09-13T00:02:04.300469891Z" level=info msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" Sep 13 00:02:04.342489 systemd[1]: Started sshd@19-10.200.20.16:22-10.200.16.10:44830.service - OpenSSH per-connection server daemon (10.200.16.10:44830). Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.341 [WARNING][6558] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0", GenerateName:"calico-kube-controllers-5564b5cb6-", Namespace:"calico-system", SelfLink:"", UID:"3fc266a6-dfea-443a-8b62-40a366a0b751", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5564b5cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653", Pod:"calico-kube-controllers-5564b5cb6-lt2mp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5665b195e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.343 [INFO][6558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.343 [INFO][6558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" iface="eth0" netns="" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.343 [INFO][6558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.343 [INFO][6558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.368 [INFO][6567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.368 [INFO][6567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.368 [INFO][6567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.377 [WARNING][6567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.378 [INFO][6567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.381 [INFO][6567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.384752 containerd[1701]: 2025-09-13 00:02:04.382 [INFO][6558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.386890 containerd[1701]: time="2025-09-13T00:02:04.384746171Z" level=info msg="TearDown network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" successfully" Sep 13 00:02:04.386890 containerd[1701]: time="2025-09-13T00:02:04.384786492Z" level=info msg="StopPodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" returns successfully" Sep 13 00:02:04.386890 containerd[1701]: time="2025-09-13T00:02:04.385358293Z" level=info msg="RemovePodSandbox for \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" Sep 13 00:02:04.386890 containerd[1701]: time="2025-09-13T00:02:04.385390493Z" level=info msg="Forcibly stopping sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\"" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.424 [WARNING][6582] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0", GenerateName:"calico-kube-controllers-5564b5cb6-", Namespace:"calico-system", SelfLink:"", UID:"3fc266a6-dfea-443a-8b62-40a366a0b751", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5564b5cb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"5e89930a6bbdcf232e2c0d14b914334660be0f2df3fe2c68276cbd23f8f69653", Pod:"calico-kube-controllers-5564b5cb6-lt2mp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa5665b195e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.424 [INFO][6582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.424 [INFO][6582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" iface="eth0" netns="" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.424 [INFO][6582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.424 [INFO][6582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.443 [INFO][6589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.443 [INFO][6589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.443 [INFO][6589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.453 [WARNING][6589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.453 [INFO][6589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" HandleID="k8s-pod-network.f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Workload="ci--4081.3.5--n--4f403f96f8-k8s-calico--kube--controllers--5564b5cb6--lt2mp-eth0" Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.455 [INFO][6589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.458790 containerd[1701]: 2025-09-13 00:02:04.457 [INFO][6582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103" Sep 13 00:02:04.459537 containerd[1701]: time="2025-09-13T00:02:04.459221029Z" level=info msg="TearDown network for sandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" successfully" Sep 13 00:02:04.472048 containerd[1701]: time="2025-09-13T00:02:04.471819059Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.472048 containerd[1701]: time="2025-09-13T00:02:04.471901579Z" level=info msg="RemovePodSandbox \"f614c9d1fa6500809a0465efc3fe5672aba943541ab62d30d7c08e5cfbaa7103\" returns successfully" Sep 13 00:02:04.472450 containerd[1701]: time="2025-09-13T00:02:04.472414740Z" level=info msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.511 [WARNING][6603] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c", Pod:"coredns-674b8bbfcf-sv5f6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie775441c720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.511 [INFO][6603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.511 [INFO][6603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" iface="eth0" netns="" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.512 [INFO][6603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.512 [INFO][6603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.530 [INFO][6610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.530 [INFO][6610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.530 [INFO][6610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.540 [WARNING][6610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.540 [INFO][6610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.541 [INFO][6610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.544955 containerd[1701]: 2025-09-13 00:02:04.543 [INFO][6603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.545597 containerd[1701]: time="2025-09-13T00:02:04.545011833Z" level=info msg="TearDown network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" successfully" Sep 13 00:02:04.545597 containerd[1701]: time="2025-09-13T00:02:04.545037033Z" level=info msg="StopPodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" returns successfully" Sep 13 00:02:04.545648 containerd[1701]: time="2025-09-13T00:02:04.545596834Z" level=info msg="RemovePodSandbox for \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" Sep 13 00:02:04.545648 containerd[1701]: time="2025-09-13T00:02:04.545625274Z" level=info msg="Forcibly stopping sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\"" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.585 [WARNING][6624] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ac6ed623-78ab-4da1-b5bf-5e6d5fcf0386", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 0, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-4f403f96f8", ContainerID:"4748664e6352d5bbbfc485284b7bab4c72c8f1521b755dddca7cc033fe36fa8c", Pod:"coredns-674b8bbfcf-sv5f6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie775441c720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.585 [INFO][6624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.585 [INFO][6624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" iface="eth0" netns="" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.585 [INFO][6624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.585 [INFO][6624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.605 [INFO][6631] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.605 [INFO][6631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.605 [INFO][6631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.615 [WARNING][6631] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.615 [INFO][6631] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" HandleID="k8s-pod-network.9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Workload="ci--4081.3.5--n--4f403f96f8-k8s-coredns--674b8bbfcf--sv5f6-eth0" Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.617 [INFO][6631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.623226 containerd[1701]: 2025-09-13 00:02:04.619 [INFO][6624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914" Sep 13 00:02:04.623226 containerd[1701]: time="2025-09-13T00:02:04.623088258Z" level=info msg="TearDown network for sandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" successfully" Sep 13 00:02:04.632091 containerd[1701]: time="2025-09-13T00:02:04.632026240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:02:04.632551 containerd[1701]: time="2025-09-13T00:02:04.632128360Z" level=info msg="RemovePodSandbox \"9f19bdea37617420864e750a6611d237657d99847eae003fcaa0418c41a92914\" returns successfully" Sep 13 00:02:04.632859 containerd[1701]: time="2025-09-13T00:02:04.632796001Z" level=info msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.667 [WARNING][6645] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.667 [INFO][6645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.667 [INFO][6645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
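The recurring "Failed to get podSandbox status ... Sending the event with nil podSandboxStatus" warning shows the other half of the idempotency story, on containerd's side: when the sandbox is already gone by the time RemovePodSandbox runs, the lifecycle event is still emitted, just with a nil status, and the removal reports success. A toy Go sketch of that tolerance, under assumed names (this is not containerd's actual implementation):

```go
// Hypothetical sketch of NotFound-tolerant sandbox removal, modeled on the
// warning in the log; function names and types are assumptions.
package main

import (
	"errors"
	"fmt"
)

var errSandboxNotFound = errors.New("an error occurred when try to find sandbox: not found")

// getPodSandboxStatus stands in for the lookup that fails once the sandbox
// has already been torn down.
func getPodSandboxStatus(id string) (interface{}, error) {
	return nil, errSandboxNotFound
}

func sendContainerEvent(id string, status interface{}) {
	// The event is sent even when status is nil, as the log states.
}

// removePodSandbox treats a missing sandbox as already removed: it warns,
// sends the event with a nil status, and returns success.
func removePodSandbox(id string) error {
	status, err := getPodSandboxStatus(id)
	if err != nil {
		if !errors.Is(err, errSandboxNotFound) {
			return err
		}
		fmt.Printf("warning: %v; sending the event with nil podSandboxStatus\n", err)
		status = nil
	}
	sendContainerEvent(id, status)
	fmt.Printf("RemovePodSandbox %q returns successfully\n", id)
	return nil
}

func main() {
	_ = removePodSandbox("17a75054c119...")
}
```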
ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" iface="eth0" netns="" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.667 [INFO][6645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.667 [INFO][6645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.691 [INFO][6652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.691 [INFO][6652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.691 [INFO][6652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.705 [WARNING][6652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.705 [INFO][6652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.707 [INFO][6652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.710714 containerd[1701]: 2025-09-13 00:02:04.708 [INFO][6645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.710714 containerd[1701]: time="2025-09-13T00:02:04.710570426Z" level=info msg="TearDown network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" successfully" Sep 13 00:02:04.710714 containerd[1701]: time="2025-09-13T00:02:04.710596826Z" level=info msg="StopPodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" returns successfully" Sep 13 00:02:04.711551 containerd[1701]: time="2025-09-13T00:02:04.711425828Z" level=info msg="RemovePodSandbox for \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" Sep 13 00:02:04.711551 containerd[1701]: time="2025-09-13T00:02:04.711457589Z" level=info msg="Forcibly stopping sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\"" Sep 13 00:02:04.759931 sshd[6565]: Accepted publickey for core from 10.200.16.10 port 44830 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:04.762975 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:04.771381 systemd-logind[1678]: New session 22 of user core. Sep 13 00:02:04.775896 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.747 [WARNING][6668] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" WorkloadEndpoint="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.748 [INFO][6668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.748 [INFO][6668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" iface="eth0" netns="" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.748 [INFO][6668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.748 [INFO][6668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.790 [INFO][6675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.790 [INFO][6675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.791 [INFO][6675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.800 [WARNING][6675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.800 [INFO][6675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" HandleID="k8s-pod-network.17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Workload="ci--4081.3.5--n--4f403f96f8-k8s-whisker--748b96b4cb--584tl-eth0" Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.801 [INFO][6675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:02:04.804719 containerd[1701]: 2025-09-13 00:02:04.803 [INFO][6668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800" Sep 13 00:02:04.805122 containerd[1701]: time="2025-09-13T00:02:04.804769730Z" level=info msg="TearDown network for sandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" successfully" Sep 13 00:02:04.815227 containerd[1701]: time="2025-09-13T00:02:04.815173995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Sep 13 00:02:04.817236 containerd[1701]: time="2025-09-13T00:02:04.815260555Z" level=info msg="RemovePodSandbox \"17a75054c1194169dd58f7b252496fac8b03f2482d84f7a8136a5a2e381d3800\" returns successfully" Sep 13 00:02:05.127380 sshd[6565]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:05.130896 systemd[1]: sshd@19-10.200.20.16:22-10.200.16.10:44830.service: Deactivated successfully. Sep 13 00:02:05.134388 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:02:05.135608 systemd-logind[1678]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:02:05.136706 systemd-logind[1678]: Removed session 22. Sep 13 00:02:07.512995 update_engine[1679]: I20250913 00:02:07.512934 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:02:07.513813 update_engine[1679]: I20250913 00:02:07.513545 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:02:07.513813 update_engine[1679]: I20250913 00:02:07.513777 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:02:07.548029 update_engine[1679]: E20250913 00:02:07.547909 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:02:07.548029 update_engine[1679]: I20250913 00:02:07.547998 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:02:10.208634 systemd[1]: Started sshd@20-10.200.20.16:22-10.200.16.10:59500.service - OpenSSH per-connection server daemon (10.200.16.10:59500). Sep 13 00:02:10.618462 sshd[6715]: Accepted publickey for core from 10.200.16.10 port 59500 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:10.619985 sshd[6715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:10.624182 systemd-logind[1678]: New session 23 of user core. Sep 13 00:02:10.628366 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:02:10.987480 sshd[6715]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:10.991826 systemd[1]: sshd@20-10.200.20.16:22-10.200.16.10:59500.service: Deactivated successfully. Sep 13 00:02:10.995279 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:02:10.996396 systemd-logind[1678]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:02:10.997676 systemd-logind[1678]: Removed session 23. Sep 13 00:02:16.071403 systemd[1]: Started sshd@21-10.200.20.16:22-10.200.16.10:59510.service - OpenSSH per-connection server daemon (10.200.16.10:59510). Sep 13 00:02:16.502667 sshd[6771]: Accepted publickey for core from 10.200.16.10 port 59510 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8 Sep 13 00:02:16.505668 sshd[6771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:16.512438 systemd-logind[1678]: New session 24 of user core. Sep 13 00:02:16.517708 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:02:16.955577 sshd[6771]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:16.960053 systemd[1]: sshd@21-10.200.20.16:22-10.200.16.10:59510.service: Deactivated successfully. Sep 13 00:02:16.963414 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:02:16.971531 systemd-logind[1678]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:02:16.973320 systemd-logind[1678]: Removed session 24. 
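The update_engine lines above trace a single fetch attempt: start the transfer, set curl options, arm a 1-second timeout source, and when the host "disabled" fails to resolve (this image points its Omaha endpoint at the literal string "disabled" to switch updates off), count another retry ("No HTTP response, retry 3"). A rough Go equivalent of that loop, purely illustrative since the real fetcher is C++ (libcurl_http_fetcher.cc):

```go
// Toy version of the retrying fetch loop the log traces; not the real
// update_engine code, and the URL is the log's placeholder host.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func fetchWithRetries(url string, maxRetries int) error {
	client := &http.Client{Timeout: 1 * time.Second} // "timeout source: 1 seconds"
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		fmt.Println("Starting/Resuming transfer")
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err // e.g. "Could not resolve host: disabled"
			fmt.Printf("Unable to get http response code: %v; retry %d\n", err, attempt)
			continue
		}
		resp.Body.Close()
		return nil
	}
	return fmt.Errorf("transfer failed after %d retries: %w", maxRetries, lastErr)
}

func main() {
	if err := fetchWithRetries("http://disabled/", 3); err != nil {
		fmt.Println("Transfer resulted in an error:", err)
	}
}
```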
Sep 13 00:02:17.514146 update_engine[1679]: I20250913 00:02:17.512777 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:02:17.514146 update_engine[1679]: I20250913 00:02:17.513022 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:02:17.514146 update_engine[1679]: I20250913 00:02:17.513262 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:02:17.551191 update_engine[1679]: E20250913 00:02:17.548179 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548262 1679 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548270 1679 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:02:17.551191 update_engine[1679]: E20250913 00:02:17.548365 1679 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548382 1679 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548388 1679 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548393 1679 update_attempter.cc:306] Processing Done.
Sep 13 00:02:17.551191 update_engine[1679]: E20250913 00:02:17.548408 1679 update_attempter.cc:619] Update failed.
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548413 1679 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548416 1679 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548421 1679 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548493 1679 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548518 1679 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:02:17.551191 update_engine[1679]: I20250913 00:02:17.548529 1679 omaha_request_action.cc:272] Request:
Sep 13 00:02:17.551191 update_engine[1679]:
Sep 13 00:02:17.551191 update_engine[1679]:
Sep 13 00:02:17.551664 update_engine[1679]:
Sep 13 00:02:17.551664 update_engine[1679]:
Sep 13 00:02:17.551664 update_engine[1679]:
Sep 13 00:02:17.551664 update_engine[1679]:
Sep 13 00:02:17.551664 update_engine[1679]: I20250913 00:02:17.548535 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:02:17.551664 update_engine[1679]: I20250913 00:02:17.548705 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:02:17.551664 update_engine[1679]: I20250913 00:02:17.548926 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
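[annotation] The update_engine entries above show the Omaha client posting to a server whose hostname is literally "disabled", the Flatcar convention for a switched-off update server, so the DNS failure is expected rather than a fault. (The blank continuation lines after "Request:" carried the Omaha request XML in the original journal; it did not survive capture and is left as a gap.) Once retries are exhausted, transfer error 2000 is folded into kActionCodeOmahaErrorInHTTPResponse (error code 37) and further failures are ignored until a valid response arrives. A rough Go sketch of that retry-then-collapse behaviour, with a plain HTTP GET standing in for the real Omaha POST:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// The constant names echo the log; the values are taken from the messages
// above ("No HTTP response, retry 3", "error code: 37").
const (
	maxRetries                          = 3
	kActionCodeOmahaErrorInHTTPResponse = 37
)

// fetchOmaha retries a fixed number of times with a one-second pause, then
// collapses any transport-level failure into the single Omaha error code,
// the way the log's "Converting error code 2000 to
// kActionCodeOmahaErrorInHTTPResponse" line shows.
func fetchOmaha(url string) (int, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err
			fmt.Printf("E: unable to get http response code: %v (retry %d)\n", err, attempt)
			time.Sleep(1 * time.Second) // the fetcher's "timeout source: 1 seconds"
			continue
		}
		resp.Body.Close()
		return resp.StatusCode, nil
	}
	return kActionCodeOmahaErrorInHTTPResponse, lastErr
}

func main() {
	// "disabled" is not a resolvable host, so this fails by design,
	// exactly as in the journal above.
	code, err := fetchOmaha("http://disabled/update")
	fmt.Printf("result: code=%d err=%v\n", code, err)
}
```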
Sep 13 00:02:17.552027 locksmithd[1767]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 13 00:02:17.585322 update_engine[1679]: E20250913 00:02:17.585085 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585186 1679 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585196 1679 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585201 1679 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585208 1679 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585212 1679 update_attempter.cc:306] Processing Done.
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585219 1679 update_attempter.cc:310] Error event sent.
Sep 13 00:02:17.585322 update_engine[1679]: I20250913 00:02:17.585230 1679 update_check_scheduler.cc:74] Next update check in 40m43s
Sep 13 00:02:17.585790 locksmithd[1767]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 13 00:02:22.042289 systemd[1]: Started sshd@22-10.200.20.16:22-10.200.16.10:57700.service - OpenSSH per-connection server daemon (10.200.16.10:57700).
Sep 13 00:02:22.462155 sshd[6784]: Accepted publickey for core from 10.200.16.10 port 57700 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:02:22.463913 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:22.472330 systemd-logind[1678]: New session 25 of user core.
Sep 13 00:02:22.477590 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:02:22.839954 sshd[6784]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:22.843808 systemd[1]: sshd@22-10.200.20.16:22-10.200.16.10:57700.service: Deactivated successfully.
Sep 13 00:02:22.845935 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:02:22.847346 systemd-logind[1678]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:02:22.848439 systemd-logind[1678]: Removed session 25.
Sep 13 00:02:27.925472 systemd[1]: Started sshd@23-10.200.20.16:22-10.200.16.10:57706.service - OpenSSH per-connection server daemon (10.200.16.10:57706).
Sep 13 00:02:28.343239 sshd[6797]: Accepted publickey for core from 10.200.16.10 port 57706 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:02:28.343980 sshd[6797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:28.349416 systemd-logind[1678]: New session 26 of user core.
Sep 13 00:02:28.355290 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:02:28.789966 sshd[6797]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:28.794428 systemd[1]: sshd@23-10.200.20.16:22-10.200.16.10:57706.service: Deactivated successfully.
Sep 13 00:02:28.798563 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:02:28.800733 systemd-logind[1678]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:02:28.802595 systemd-logind[1678]: Removed session 26.
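[annotation] The locksmithd lines bracket the failure: it records update_engine's status going from UPDATE_STATUS_REPORTING_ERROR_EVENT back to UPDATE_STATUS_IDLE, and the scheduler picks the next check in 40m43s. The odd-looking interval suggests a randomized (jittered) poll. A small Go sketch of such a scheduler; the 45-minute base and 20-minute fuzz are illustrative assumptions, not Flatcar's actual constants:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextCheckDelay returns the base polling interval plus uniform jitter in
// [-fuzz/2, +fuzz/2), so a fleet of machines does not poll the update
// server in lockstep.
func nextCheckDelay(base, fuzz time.Duration) time.Duration {
	offset := time.Duration(rand.Int63n(int64(fuzz))) - fuzz/2
	return base + offset
}

func main() {
	delay := nextCheckDelay(45*time.Minute, 20*time.Minute)
	// Produces output of the same shape as the log line above,
	// e.g. "Next update check in 40m43s".
	fmt.Printf("Next update check in %s\n", delay.Round(time.Second))
}
```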
Sep 13 00:02:33.874629 systemd[1]: Started sshd@24-10.200.20.16:22-10.200.16.10:42702.service - OpenSSH per-connection server daemon (10.200.16.10:42702).
Sep 13 00:02:34.302726 sshd[6831]: Accepted publickey for core from 10.200.16.10 port 42702 ssh2: RSA SHA256:blsec9WXBGJMQIu7Y4EANO2ooyLDRMDELziWNnUsds8
Sep 13 00:02:34.304623 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:02:34.311401 systemd-logind[1678]: New session 27 of user core.
Sep 13 00:02:34.316331 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:02:34.726987 sshd[6831]: pam_unix(sshd:session): session closed for user core
Sep 13 00:02:34.730297 systemd-logind[1678]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:02:34.731252 systemd[1]: sshd@24-10.200.20.16:22-10.200.16.10:42702.service: Deactivated successfully.
Sep 13 00:02:34.734815 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:02:34.737869 systemd-logind[1678]: Removed session 27.
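[annotation] The remainder of the section is the recurring SSH session lifecycle: Accepted publickey, pam_unix session open, a logind session with its session-N.scope, then teardown in reverse. When auditing a captured journal it can be useful to pair these open/close events; a standalone Go sketch (not part of any tool shown in this log) might look like this:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Patterns for the systemd-logind lines seen throughout this journal.
var (
	openRe  = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	closeRe = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	// Sample input copied from the journal above.
	journal := `Sep 13 00:02:34.311401 systemd-logind[1678]: New session 27 of user core.
Sep 13 00:02:34.737869 systemd-logind[1678]: Removed session 27.`

	open := map[string]string{} // session ID -> user
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			open[m[1]] = m[2] // record the opening event
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
			delete(open, m[1])
		}
	}
	// Anything left in `open` is a session that never closed in the capture.
}
```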